Dr Bryn Harris is the Chief Legal Counsel of the Free Speech Union
The Online Safety Bill has been laid before Parliament. Ministers, including Nadine Dorries last week, have worked hard to persuade voters that the Bill contains important safeguards for free speech online. Are they right?
Even the most generous observer would answer ‘no’. The Bill is informed by a desire to protect freedom of speech, but largely does the opposite.
We should give the government its due. The Bill imposes free speech obligations on online providers where previously there were none. The big social media platforms will no longer have wholly free hands. They will be under free-standing obligations to implement processes that protect political speech (or ‘content of democratic importance’) and journalistic content. If they do not, users can complain and Ofcom can take action. This is a considerable improvement.
This Bill, however, fundamentally concerns the prevention of ‘harm’, not the protection of free speech (hence the name). When the ‘safety’ duties are engaged alongside the free speech duties, the balancing exercise will skew decisively towards harm prevention – concrete action must be taken in relation to harmful or illegal content, but social media companies are only asked to ‘have regard’ for free speech, which is the weakest of the legal duties.
The Bill thus enshrines in statute the illiberal approach all too familiar to the Free Speech Union, with free speech treated as an afterthought. It reverses the liberal philosophy of the English common law, whose starting point is a presumption of liberty unless a specific rule says otherwise. Online platforms will start by asking whether a user has harmed someone. Only much later will they ‘have regard’ to that user’s freedom of speech.
The Bill has also become worse during its journey from a White Paper three years ago. Whereas the previous draft required platforms to ‘minimise’ illegal content, they will now have to ‘prevent’ users from encountering illegal content, where necessary by removing it.
This tougher duty will likely result in over-removal, because risk-savvy providers, fearful of potentially huge fines (10 per cent of a company’s annual global turnover), will be cautious. Where a free speech duty and a safety duty compete, removing content that might be harmful will be the safer option – the free speech duty is weak and easily complied with (even when content is removed), whereas complying with the safety duties requires action. The box-ticking requirement to ‘have regard’ thus imposes no effective deterrent against over-removal.
The duty regarding ‘content that is harmful to adults’ has also worsened. Providers will have four options in dealing with such content: removal, restricting access, preventing promotion, or actively promoting it. The liberal option – leave it be and let adults make their own choices – isn’t available. The only option that isn’t censorious – ‘recommend or promote content that you believe to be harmful’ – is so undesirable that no platform will choose it.
Nevertheless, a new clause on ‘user empowerment duties’ is welcome. It allows adults to choose whether or not they wish to be exposed to harmful content on sites like Twitter. But the choice is illusory and the reality is paternalistic – an adult won’t be free to see everything unadulterated, including the ‘harmful’ stuff, because platforms are virtually certain to remove, restrict or downgrade harmful content. Users will be free to choose, so long as they choose not to be ‘harmed’.
However, users will have a right to sue for breach of contract if providers remove or restrict content contrary to their terms of service. This should allow users to resist providers that fail to ‘take into account’ the protections for political speech and journalistic content. It remains to be seen if these duties will genuinely restrain the instinct to over-remove content.
Also welcome are new restraints on the Secretary of State’s power to dictate what kinds of content providers must police. The categories of ‘priority’ illegal content are now stated baldly by the Bill, and are what one would expect. When it comes to content that is harmful to adults, the Secretary of State will have the power to lay a statutory instrument specifying what lawful speech social media companies will be forced to remove. It remains to be seen how censorious Nadine Dorries will be, but even if she is relatively restrained, this Bill is a hostage to fortune. It empowers a future Secretary of State at DCMS to come up with their own Index Librorum Prohibitorum.
All analysis of the Bill is speculation: we’ll only know its impact once it becomes law, and providers and Ofcom begin to implement it. What is unusual is that ministers seem to be aware of the pressures that are likely to turn the Bill into a censor’s charter.
The Culture Secretary accepts that a culture of censorship already exists among the very platforms she proposes, in essence, to entrust with deciding what to remove. Ministers seem to be aware that a repeat of the Trump Twitter ban would be disastrous. They must also know that the huge fines and even criminal sanctions that could be imposed under the Bill are virtually certain to drive excessive risk-aversion. So why is the Government introducing a Bill so likely to thwart freedom of speech?
I suspect the answer lies in an unwillingness to address a very difficult but fundamental conceptual problem – a government cannot protect free expression while also trying to prohibit harmful speech. To govern is to choose: ministers and lawmakers must show leadership and tackle the question of whether we should prioritise liberty or paternalism, or we will continue to muddle through a mess of contradictions.
Free people do not live their lives under a rulebook’s control, still less one which vexatious political activists will be able to weaponise. This Conservative Government should be true to its convictions and use this Bill to force the social media companies to do more to protect free speech.