Caroline Elsom is a Senior Researcher at the Centre for Policy Studies.

It’s almost eighteen months since the Government laid out its roadmap for tackling online harms. The Online Harms White Paper’s mission was to make the UK the safest place to go online, and the best place to grow a digital business.

This is a laudable aim, yet the plans outlined tell a different story. They set us on a course that will seriously threaten our freedom, privacy and competitiveness – while being unlikely to make us any safer.

This Conservative Government has enacted unprecedented curbs on private life to combat Covid-19. It can ill-afford to commit a potentially embarrassing unforced error that will shrink the private sphere in the UK and might damage both the Party’s and the country’s reputation for openness, tolerance and freedom.

It is time to reflect on the purpose and design of any new regulation. My new report for the Centre for Policy Studies, Safety without Censorship, uncovers the extent of the flaws in the system proposed for Ofcom to regulate online content.

The report offers an alternative model, clearly differentiating Ofcom’s powers over illegal content from its remit over legal speech that may be considered by some to be harmful.

If the Government chooses to ignore the warnings, it appears almost inevitable that the eventual Online Harms Bill will meet a similar fate to France's. In June this year, the French Constitutional Council found large parts of France's online hate speech legislation (known as the Loi Avia after National Assembly member Laetitia Avia, who drafted it) unconstitutional. Among other reasons, it breached a basic legality test for impermissible vagueness.

According to Article 10 of the European Convention on Human Rights and the UK’s own Human Rights Act 1998, restrictions on freedom of expression have to be ‘prescribed by law’ and ‘necessary in a democratic society for a legitimate aim’. This means, for example, that users must be able to reasonably foresee whether the platform will be legally obliged to remove content they are about to post.

In France, it was judged that platforms’ obligations were not set out in terms clear enough to allow the scope of liability to be determined. Given how closely the principles in the Loi Avia and the White Paper resemble each other, similar judicial challenges would likely be mounted in the UK on human rights grounds right from the start.

Even if the eventual Online Harms Bill manages to clear this hurdle, it could fall down on other legal grounds. The Loi Avia gave powers to France’s Higher Audiovisual Council to require hosts to remove the most extreme content (certain terrorist content and child sexual abuse imagery) within an hour.

For other content, deadlines of up to 24 hours apply, depending on who has made the request, the nature of the content and what sort of site it is on. This failed the requirement of necessity and proportionality, because of the powers it gave to an administrative authority rather than allowing host sites to obtain a judicial ruling on the matter.

The UK Government’s proposals are likely to run aground on both of these aspects of legality too. If anything, more so, as the White Paper expands much further into the terrain of ‘legal but harmful’ than the Loi Avia. The Foundation for Law, Justice and Society rightly points out that the White Paper leaves open the possibility that constraints on free speech could be imposed ‘on the basis of opaque agreements between platforms and politicians’ rather than being subject to the constraints of parliamentary debate.

If this fundamental principle is to change, it will involve amendment or repeal of the Human Rights Act, requiring the full legislative scrutiny of Parliament. Even then, it could still be defeated under principles of freedom of speech under English common law. Extreme caution should be exercised in going down this route to tackle online harms so as not to erode important checks against creeping censorship.

There are also issues of fairness in leaving all these crucial issues to be decided by a series of court battles. Tech giants with entire legal departments can mount these legal challenges with relative ease, but smaller companies seeking to dispute the rules do not have the resources for a fair fight.

This has already happened with large companies that have been subject to fines from the Information Commissioner’s Office. Last year, Facebook appealed against an ICO fine of £500,000 on the grounds of bias and procedural unfairness.

The ICO was forced to settle this out of court, allowing Facebook to avoid admitting liability. Much larger cases against the likes of British Airways and Marriott Hotels have been delayed as it appears the ICO is unable or unwilling to defend its position against the legal firepower large corporations are able to bring to the fight.

It is likely that two years will have passed before the Online Harms Bill makes progress through the House. Making crucial changes now could avoid many more years of legal disputes dooming this policy to failure.