Dr Moshe Kantor: Tech Giants “Shamefully Slow” To React To Antisemitism


Dr Viatcheslav Moshe Kantor is President of the European Jewish Congress, a Brussels-based representative body of Jewish communities across the continent. He has been a vocal critic of Google over controversial content hosted on its YouTube platform. Here he speaks to Red Herring about policing of the Internet, the role of social media in spreading hate speech and why Europe’s 2016 Code of Conduct should be more widely deployed.

Does the recent controversy surrounding antisemitic material online show that major web-based companies must submit to national and international laws when it comes to content on the web?

Companies like Google have been shamefully slow to react to blatant antisemitic and extremist content hosted on their sites.

A recent EU-funded survey found that only 40 per cent of all notifications of alleged illegal online hate speech are currently reviewed within 24 hours, the period of time set out in the European Commission’s Code of Conduct, which was voluntarily adopted by these leading web-based companies. This is unacceptable.

There is clearly a disparity in the way that web-based and traditional media are allowed to function. Governments are having to move fast to adapt to the changing patterns of hate crime and look at ways of putting pressure on platforms to effectively and speedily remove offensive material. Technology companies must recognise their responsibilities to society and take action.

Do you believe that multinational technology companies are sufficiently equipped to self-police the content put out on their platforms?

As the most powerful and successful technology innovators in the world, it is difficult to accept that these companies are unable to do more, and to do it more quickly, to address the presence of antisemitic, illegal and hate-filled content on their own sites.

I would further call on them to refine their definition of appropriate and inappropriate content. Google’s advertising policy states that it does not allow the promotion of hatred, violence, or religious or political intolerance, or of organisations with such views, in its ad network. However, the company is still failing to remove antisemitic and extremist content, so it clearly has work to do to address this serious matter.

Has the web been the single biggest driving factor behind a rise in antisemitic, Islamophobic, homophobic and other hate speech proliferating in recent years?

I think the proliferation of social media has certainly made hate speech more prominent, but I do not think it is the root cause. It has, however, provided a platform for antisemites and extremists to promote their hatred.

What is clear is that existing legislation is ill-equipped to address the worrying and growing issue of online hate and incitement to violence. The web community has traditionally fought hard to protect the right to freedom of expression, with many arguing that tackling online hate speech contravenes this essential freedom. But there is clearly a point where freedom of speech ends and incitement to violence begins. We must act to hold these web-based companies to account in policing and eliminating offensive material from their sites.

Have states done enough to stamp a legislative mark on online content? If not, what more could be done to ensure that online hate speech is curtailed as much as possible?

Governments across Europe are taking this issue very seriously, and rightly so. The rapid escalation of hate speech online, and the religiously motivated violence we are seeing across Europe, mean that our leaders have a critical job to do to help remove this evil from our society.

How fundamental do you believe the EU, the Digital Single Market and other multinational blocs are in tackling online hate speech?

In May 2016, the European Commission presented Facebook, Twitter and YouTube with a Code of Conduct on countering illegal hate speech online, which called on the companies to have clear and effective processes in place to review notifications of hate speech so they could aim to remove the majority of illegal hate speech within 24 hours. It further called on the companies to promote transparency in their commitment to the Code of Conduct, which all three companies voluntarily adopted.

The recent media- and advertiser-led investigations have clearly exposed the failure of the technology companies to police the content on their sites and follow the European Code of Conduct to which they agreed. We now look to these companies to review the ways in which they are able to self-police their platforms and immediately address this critical issue.