Andrew Percy is the Member of Parliament for Brigg and Goole.
Last month, the anti-racism organisation Hope Not Hate revealed the dirty little secret shared by social media companies across the digital world. Not Facebook, Twitter or Telegram, not TikTok, 4Chan or Instagram, not Reddit, YouTube or Parler but all of them. The whole lot. They all harbour anti-Jewish racism. They provide platforms for hate, and whether they are good at enforcing community standards, or bad, they all bear some responsibility for spreading hate across Britain and beyond.
To some extent, what Hope Not Hate describes is akin to an anti-Semitic wildfire, where Jew hate is not only present, but also broadcast, and viewed millions of times. Frighteningly, anti-Semites aren’t just present but learning from one another, increasing the speed and efficacy of racist transmission. The report reveals that faux academic discourse is being ditched, and in its place, supposedly humorous memes are being used to hook onlookers. This is a form of cartoon clickbait which radicalises and incites people to do harm.
When we talk about online hate, we are not talking about people having mild disagreements, or being slightly uncivil towards one another. What we are talking about is a very real and very dangerous threat where people who want to harm others are allowed to post violent content, recruit followers and mobilise – spreading hate far beyond anything any sane person would be willing to tolerate.
In an open letter to Telegram, Hope Not Hate pointed out that the messaging app has been used to organise and coordinate far-right terror plots, recruit new members to white nationalist and anti-Semitic networks, radicalise young and vulnerable people, and spread conspiracy theories. At the time of writing, Telegram is hosting users such as GhostEzra, an overtly anti-Semitic QAnon influencer with over 330,000 subscribers on the platform, who runs what has been dubbed the “largest antisemitic internet forum” in the world.
Thus far, Telegram has not responded to calls for change, nor changed any of its policies, choosing instead to continue to host some of the most abhorrent content you can find on social media. These companies can no longer be trusted to regulate themselves, while we continue to wait for them to take action. The content we see on these platforms – sometimes legal, sometimes not – has real world consequences, inspiring violence against some of the most vulnerable people in our communities. The only way we will see meaningful change is if the government steps in.
Together with colleagues from across the political spectrum, I have been meeting these companies and others, challenging them directly and demanding they do better. We are told about policies, stakeholder engagement, and improved moderation arrangements, and this of course is to be welcomed. However, if the policies are absent, poorly enforced or fail to address the problem, as this new research makes clear is the case, then we know the companies cannot be trusted to act, and government and parliament must force them to do so.
The Online Safety Bill is already, in draft form, being reviewed by colleagues across several parliamentary committees. It will introduce a regulator and a series of duties of care, so that companies will have not merely an economic incentive to act, but a legal requirement. The Bill is a major improvement on the status quo, and ambitious when judged against many other global regulatory and legislative efforts.
Colleagues and I will, however, be working to ensure it is as strong as it must be to tackle this scourge of online anti-Semitism. The new report makes clear that Terms and Conditions, when thorough and well enforced, can make a difference. To that end, we will want to see minimum standards in place for the Terms and Conditions that such companies are required to adopt.
It’s important to note that any platform hosting anti-Semitism is a problem. We are not just looking to address the spread of vile hate speech on larger platforms such as Facebook or TikTok; we are looking to make sure that all platforms, big or small, are no longer allowed to spread this poison. Alongside my colleagues, I will endeavour to make sure that platforms aren’t let off the hook by the current categorisation clauses in the draft Bill.
We will also be working to ensure that the Bill remains resolutely systems focussed. That is, we will make it the company’s problem to deal with anti-Semitism, not yours or mine. We don’t want to have to report the same content over and over again; we want all of the platforms within the scope of the legislation to make the requisite changes to their operating systems, so that hate isn’t present, promoted and pervasive.
Whether it be Facebook whistle-blowers revealing that money, not morals, is the main driver for decision making, reports like Hope Not Hate’s shining a light on racism across the social media spectrum, or users’ own experiences of online abuse, there is no hiding it anymore. The secret is out; now it’s time for social media companies to face the consequences of their actions.