Victoria Hewson is Head of Regulatory Affairs at the Institute of Economic Affairs.

On Tuesday, the Joint Committee on the Draft Online Safety Bill published its report on the government’s proposed legislation. The Committee spent several weeks taking evidence, and has produced a weighty report and a set of case studies.

The report makes a number of recommendations, comprising a complete restructuring of the Bill as it stands, as well as a number of technical changes. Although some recommendations are intended to protect journalism and free speech, many could have the reverse effect and make the Bill even more censorious and intrusive.

The committee wants more power for Ofcom across the board. But to accept this would surely be a tactical blunder by the Conservatives. Why would they choose to build up a powerbase of content moderation in a quango that has not shown itself to be a defender of freedom of expression in its existing remit, and where the prospect of a conservative-minded chair has previously been met with such horror?

The committee is also fully supportive of the identity-based features of the Bill, with a focus on women and minorities, and hate crimes. This seems to contradict efforts being made elsewhere in government, such as in universities, to push back against divisive ideologies that curtail free speech and debate.

The committee recommends mandatory codes of practice (to be produced by Ofcom) to curtail anonymity and make users more traceable, and to combat disinformation and other ‘societal’ and collective harms.

Ironically, and troublingly, the recommendations on disinformation seem to be based on disinformation being absorbed and propagated by the Committee itself, which cites ‘intensive care beds full of unvaccinated Covid-19 patients’ (an emotive and contestable assertion) as a ‘harm’ that resulted from ‘amplifying the false over the true’.

It also seems a stretch to blame ‘mass murder in Myanmar’ on ‘an unregulated internet’, which the report also does. Such ill-founded examples of ‘harm’ demonstrate again that the rush to regulate the internet seems more borne out of moral panic than evidence, either of the harm caused by digital platforms or the likely effectiveness of this hugely ambitious and complex legislation.

The report rightly notes the complexity and opacity of the Bill, but its recommendations seem likely to make this worse.

The committee’s preferred approach of overarching duties of care, based on negligence standards in common law, seems appealing, but in reality this duty of care model does not read across to the realm of social media and user-to-user interactions, as leading lawyers have explained.

The committee’s recommendation that the Bill could be simplified with reference to limited ‘core objectives’ is somewhat undermined by the fact that they came up with no fewer than seven such objectives, with freedom of expression, privacy and accountability worryingly low on the list.

Direct rights for users and a new ombudsman also have superficial appeal, but experience in other sectors such as financial services suggests that this too can introduce greater complexity and have negative unintended consequences.

In terms of the content that is to be targeted, the government had already indicated in its evidence to the committee that it intends to include ‘hate crime’ as priority content that will attract the strictest duties of filtering and removal under the Bill.

The committee warmly welcomes this and has pushed for the definition of hate crime to be as wide as possible, including not only underlying crimes such as harassment, aggravated by hostility towards a protected characteristic, but also ‘stirring up’ type offences where communication of material that is threatening, abusive or insulting can itself amount to an offence, irrespective of intent or actual harm.

So, while the committee rightly identified that the ‘legal but harmful’ category in the draft Bill was uncertain and threatened free speech, its proposed remedy, of focusing only on defined illegal content, does not seem like an improvement, and may even exacerbate the problem.

Taken together with the Law Commission’s recent proposals for reforms to communications offences and hate crime laws, which the committee draws support from, the scope of illegal content here could be very wide indeed and the pressure on platforms to remove and block content will be immense.

Be under no illusions, MPs want to control what we are allowed to see and share, and they want to deploy social media platforms as their proxies to do it.

Some of the recommendations in the report make the draft Bill look positively libertarian in comparison. This itself undermines one of the committee’s less illiberal recommendations – the establishment of a special Joint Committee to scrutinise Ofcom’s and ministers’ implementation and exercise of powers under the new law.

When MPs and peers have shown themselves to be ideologically committed to controlling speech, with the consequences for digital trade and innovation that will follow, it is not clear that they are best placed to monitor the new law’s implementation.

Ideally, the courts would have more of a role in reviewing the enforcement activities of Ofcom and ministers – by way of a full, on-the-merits review, not the more deferential judicial review standard.

Forthcoming research from the IEA will put forward some further suggestions for safeguarding fundamental rights and freedoms, which will be needed if this chillingly authoritarian legislation is to proceed.