Danny Stone MBE is the Chief Executive of the Antisemitism Policy Trust
Last week, Toby Young and I spoke to a group of Conservative MPs about the Online Safety Bill. You’d expect the two of us to have extremely different views on the nature of the Bill, but I was surprised at the level of agreement we shared across different areas of it. That leads me to think some of the concerns raised about freedom of speech in relation to the Bill are more about the prioritisation of issues than about underlying principles.
First, it’s important to have a shared understanding of what the Bill actually does. It does not require that material be deleted unless it is illegal. When it comes to legal but harmful content, only the largest companies are required to act. The Bill says these larger companies must risk assess for legal but harmful content and develop Terms and Conditions to address it.
However, ‘takedown’ (that is, the removal of content, which is mentioned only once in the Bill) is just one of a number of measures a company can deploy in dealing with legal harms. There is no duty to remove content or to censor legal speech, and no penalty associated with failure to take action, so long as companies are consistent in their approach.
If the Bill became an Act as drafted, platforms would set out the risks associated with their business, choose their harmful content policies and enforcement mechanisms, and be required to apply these consistently. We, as adults, could then make an informed, risk-based choice about the services we wish to use.
This is a systems-based approach: it focuses not on content itself, but on how content is delivered and how systems respond to it. As for the harms themselves, they will no longer be determined in Silicon Valley, but instead in Westminster.
The Bill doesn’t require action on offensive speech, despite the fact that we go further in limiting expression in other areas. Ofcom is required, in its regulation of our TV and radio content, to set “generally accepted” standards to protect the public from “offensive and harmful material”; the Prevent programme involves assessment of legal speech when addressing extremism. Even in Parliament, MPs aren’t allowed to call one another liars in the chamber.
Parliamentarians have therefore long considered that some legal speech must be addressed. National polling shows the public support this approach. The Bill, however, focuses on the systems.
The systems behind what we see on user-to-user services arise from active choices. They aren’t neutral spaces with a default guarantee of freedom; they amplify or minimise harms, including antisemitism. Though social media is free to use, and a ‘leveller’ in some respects, platforms provide the opposite of the inclusivity, good faith and discourse ethics central to Habermas’s conception of the public sphere.
The owners monetise and monopolise our expression data. Platforms gamify discourse, exploiting human psychology through traffic-hungry design that drives people towards increasingly inflammatory content.
Rules are made and applied inconsistently, without transparency. In a real town square, we don’t find 20,000 people shipped in by the council to attack the speaker whilst others do a deep dive on their background in a bid to share their address, endanger their family and damage their employment prospects.
What we do find in a real town square, however, are basic protections: no trip hazards, no over-amplified PA systems. This Bill offers the online equivalent. The public square has been subjected to disruption, and companies (which, let us not forget, are not opposing the Bill) are crying out for regulation, having had nearly two decades to get it right themselves. They have failed to deliver unshackled speech online; the terms of engagement are unequal.
That inequality is often not considered in debates about the Bill. When speaking to the MPs, I argued that we define free speech too narrowly. Online harm drives people out of online spaces and into isolation. The great focus is on censorship when, in fact, so many are already suppressed or marginalised.
Research shows individuals avoid political discourse for fear of harassment. Moreover, the spaces we inhabit are already far from diverse. The Pew Research Center, for example, found that the most active 25% of Twitter users produce 97% of all tweets.
In these potentially monotone spaces, the distribution of legal harms is sometimes encouraged. Let us not pretend this is simply offensive speech. Praise for terrorists, where it does not directly encourage violence, is a legal harm. Making your profile name ‘gas the k**es’ is also legal but harmful, as are denying the Holocaust and promoting pseudo-scientific racism.
This type of material can lead, and has led, to offline harm. The mass killer responsible for the murders in Buffalo, New York, described how he had been radicalised by material online. Is it any wonder that so many voices stay away from these horrible online spaces?
A systems-based approach, which the Bill aspires to, is more likely to preserve free speech and to put democratic rights at its centre. Legally harmful content should be viewed as a design problem. It’s not that such content cannot be there or must be deleted (though I obviously have strong views about whether it should be), but rather that platforms should be incentivised to ensure their systems minimise its promotion, or ask users to stop and think.
The Bill creates a regulated marketplace of harm. It doesn’t force the removal of legal content: any penalties for posting such content are determined by social media companies, not the state.
Both Toby Young and I had concerns about the Bill. We were worried that the Secretary of State (rather than Parliament) is given too much power to dictate what counts as a legal harm, about the degree of detail being left to secondary legislation, and about platform inconsistencies in takedown (in relation to free speech and harmful materials respectively).
However, my contention was that the aspect of the Online Safety Bill requiring action on ‘legal but harmful’ materials addresses longstanding market failures. My focus is on the harm caused by platforms; his appeared to be ensuring that free speech is not damaged by action against such harms.
The systems focus should enable us to reach a happy medium, in my view, one not bogged down in discussion of content. The debate helped to reassure me that this is viable. The legal but harmful provisions simply recognise a reality which, if ignored, will require future legislation anyway, when the next round of system design flaws leads to offline harm.