The implications of Twitter banning the President of the United States’ account are big.  And those of Parler being forced offline by Google and Apple are even bigger.

On the decisions themselves, we will be brisk.  The barring of Donald Trump’s account was wrong.  There is a case for arguing that some of his tweets (during the run-up to last week’s riot in the Capitol, for example) were likely to incite violence.  The least bad solution to that would have been to delete the tweets, not ban the whole account.

Google and Apple’s initiative puts both in the position of being seen to line up with the Left against the Right.  It might not always be wrong to do so.  But many in the mainstream centre-right believe that the social media giants are not neutral arbiters.  That Apple rejected a content moderation proposal from Parler will fuel this conviction.

We will not trawl through the evidence today in search of a judgement.  Nonetheless, it is undoubtedly true that the culture of the big social media companies is broadly, in the transatlantic sense of the word, liberal.  With Twitter operating normally and Parler not doing so, Apple and Google will be seen by millions of Americans to be using their muscle to shut down free speech for one side only.

The implications are incendiary.  But none of the issues raised by either ban are new.  To stick with Twitter for the moment, its Trump decision does not suddenly confirm it as a publisher rather than a platform.  That has clearly been the case for years: like other social media sites, it bars terrorists and child abusers, for example; often, its ineffectiveness in doing so gives rise to complaints.

Perhaps it would be better to describe it as a network – the virtual equivalent of a vast city, rather than use analogies that glance back to the origins of Cambridge University Press (and beyond).  If so, it is a city full of noises: those of Russian bots and non-state terrorist actors, of mobs hunting down the next victim to “cancel”, of a myriad of fake accounts…and tyrants such as Xi Jinping (no ban for him).

Some argue that Twitter is a natural monopoly.  Others, that it isn’t – citing a mass of other social media outlets such as Facebook, Instagram or Pinterest.  But none of these occupy the position of Twitter in the public square as a forum for debate.  So it is a matter of public interest if that debate becomes slanted or skewed.

But though state action to regulate social media is unproblematic in principle, it’s deeply problematic in practice, and our old friend the law of unintended consequences applies.  For a start: which state should act?  Twitter is an American company operating internationally – like most of the social media behemoths to date, though TikTok reminds us that China is getting in on the act.

Europe, significantly, isn’t – though the European Court of Justice has ruled that Facebook must take down libellous content not just in Austria, but globally.  But countries have different definitions of libel, even within the EU itself.  So the same standard wouldn’t always apply. (We pass over the applicability of the ECJ’s judgements in, say, Peru or, er, here.)

A libel is clearly an online harm.  But defining such harms and then regulating them are only one aspect of the issues thrown up by social media.  There are others – such as promoting competition and protecting privacy.  These can clash: for example, competition favours moving data between different providers with as little friction as possible. Privacy doesn’t.

Insofar as there can be regulation in one country (and the way the world is going suggests, as so often elsewhere, a future of overlapping sets of different rules), there seem to be two broad roads that Britain could take.  The first is the one we seem to be going down already: regulation by Ofcom.

At which point, some of the difficulties bound up with regulation become clearer.  The idea of an Online Harms Bill was powered by the haunting death of Molly Russell, a 14-year-old girl who took her own life in 2017 after seeing images of self-harm on Instagram, Snapchat and Whatsapp.

The Bill was unveiled in 2019, two years after that event, and then delayed.  Last year, Caroline Dinenage, then a Minister at the Culture Department, said that it would not be brought before Parliament until the end of this year.  The delay may largely be explained by the problematic proposal at the heart of the Bill: action when content may be legal but nonetheless harmful.

That would give Ofcom wide powers – together with the capacity to levy fines of up to £18 million or 10 per cent of global turnover and, ultimately, to block services from the UK altogether.  Could the regulator be trusted to act sensibly with these powers were a Corbyn-like government in power?  Could it be trusted at all?

Would not the big boys take the fines on the chin, while new entrants to the market were frozen out – with fewer resources, too, for court battles?  What about “hate speech” – in a country in which the speech to a Conservative Party Conference of a recent Home Secretary can be recorded as a “hate incident”?

The second route would be that of self-regulation – as with the press.  To say that this is controversial where those newspapers are concerned would be an understatement.  But it would come with a sting in the tail, at least if some politicians have their way (and courts allow): that of the power to sue Twitter itself, say, if it carried a libel, rather than just the person who wrote it.

The best-known example of present practice is the legal action brought by the late Alistair McAlpine, often said to be Margaret Thatcher’s most effective fundraiser, who successfully sued Sally Bercow for issuing a defamatory tweet about him.  If the publisher analogy applies, Twitter should also have been held responsible.

There is a danger of putting the cart before the horse – that’s to say, assuming social media is the sole driver of desocialisation, rather than also looking at other causes, such as a development that, on balance, brings more gains than losses: more TV channels.  But both old media and new can help to form what our columnist Neil O’Brien calls “communities of radicalisation”.

Benedict Evans compares regulating social media to regulating car use: “we can tell car companies to make their cars safer, and punish them if they cut corners, but we can’t tell them to end all accidents or make gasoline that doesn’t burn”.  In other words, good solutions to problems will be gradual and diverse, just as different laws and agencies deal with speed limits, car crime and congestion.

A happy ending would see regulators and social media providers in dialogue, with the former nudging the latter to behave more responsibly – for example, not loading their algorithms so that viewing extremist content becomes a gateway to viewing yet more extremist content.  The less happy conclusion sees the kind of user anger inherent in the Parler ban swell into more protest, alienation and violence.

The dystopian endgame would see either, on the one hand, social media Jupiters crashing wherever they will – squashing new market entrants, indifferent to the public good, and leaving a trail of bias in their wake; or, on the other, unaccountable bureaucracies and despotic regimes using law as a club to beat freedom into a pulp.  How confident can we be of the happy ending rather than the other?