“I bet you a thousand pounds you don’t win a majority,” crowed the BBC’s Chris Smith, interviewing the Prime Minister, ahead of the election. The source of Mr Smith’s cockiness? Opinion polls, all of which agreed that a hung parliament was a racing certainty. Labour were most likely to be the largest party.
The episode was out of place yet commonplace on the BBC (has Smith coughed up the grand yet?), but such conduct is so visible that it’s easily “corrected for” (by John Whittingdale, one hopes). However, Mr Smith isn’t our principal concern. Not today.
More worrying is the source of the broadcaster’s confidence: that endless sequence of opinion polls, nearly all of which (apparently) under-estimated Conservative support. This sort of bias is more troubling than the BBC variety.
Unless a sociologist can prove that repeated exposure to surveys which underestimate Tory support has no impact on the choices we make when finally alone in the polling booth – and I do not believe such proof could ever be gathered – this bias is a democratic problem.
We are faced with two hypotheses, which need not be mutually exclusive, of course. The first is that the polls were accurate “snapshots”, and as Lord Ashcroft says: snapshots should not be used for prediction.
What happened, according to the Swing Hypothesis, is that in between the very last such snapshot, at the end of a series lasting for years – in the gap between falling from the horse, and hitting the ground, as it were – the voting behaviour of the British public changed, significantly enough that “Tory majority” rather than “Labour-largest-party” became the outcome. Ahem.
The Swing Hypothesis looks at the bewildered faces of Harriet Harman and Paddy Ashdown, confronted with the exit poll (a little more on this, later), and says, effectively: “Sorry. It’s not my fault if you use an estimate of party support taken at 06:59:59, and use this to predict party support, integrated over the time interval 7am-10pm on May 7th. Estimates of now aren’t predictions of then.”
All statisticians know this, but one wonders why people are so eager to conduct and publish opinion polls if their predictive value is void. My personal belief in the Swing Hypothesis is next to nil.
The other hypothesis, of course, is that the polls were wrong, and repeatedly wrong for a very long time.
Now, all polls are wrong by definition: any estimate of a “thing” can only ever equal that “thing” by chance. All newspapers, at least since 1992, have decorated articles on polls with their “margins of error”, and though I’ve never heard a commentator describe such “margins” correctly, there is a widespread intuitive understanding that they represent, in some sense, the precision of the estimates. “The Tories have 33 per cent support, plus or minus three per cent.” (It’s not obvious what this means, by the way: I ask every candidate I interview to define such a “confidence interval” for me. You’d be astonished at how few manage to do so successfully.) We have a comfortable rule-of-thumb familiarity with these margins.
If 39 people in a sample of 100 support the Conservatives, you have an intuitive understanding that this estimate is likely to be less precise than one from a much larger survey, which recorded (say) 3,100 Conservative supporters among 10,000 voters. We’d believe that the 39 per cent from the first poll is less likely to be accurate than the 31 per cent from the second.
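That rule of thumb can be made concrete. Here is a minimal sketch of the textbook margin-of-error calculation, assuming a simple random sample (the idealised formula, not any pollster’s actual method):

```python
import math

def margin_of_error(p_hat, n, z=1.96):
    """Half-width of the textbook 95 per cent confidence interval
    for a proportion p_hat estimated from a simple random sample of n."""
    return z * math.sqrt(p_hat * (1 - p_hat) / n)

# 39 Tory supporters in a sample of 100:
small = margin_of_error(0.39, 100)       # roughly plus-or-minus 9.6 points
# 3,100 Tory supporters in a sample of 10,000:
large = margin_of_error(0.31, 10_000)    # roughly plus-or-minus 0.9 points
```

Note that a hundred-fold increase in sample size buys only a ten-fold tightening of the margin, because precision scales with the square root of the sample size.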
But – outside of the proclivity of BBC interviewers – we never discuss bias. Statisticians know that all estimators of “things”, whether the effect size of a drug in a clinical trial or, indeed, the proportion of the population who intend to vote Conservative, have two properties: the precision with which the truth is approximated, and a bias; that is, how far from the truth, on average, the estimates fall, even were the sample size of a survey allowed to grow enormous.
Let’s try and make this as intuitive as the idea of precision. Suppose I tell you that the first, smaller sample was taken completely at random: 100 voters chosen randomly from the population, all of whom volunteered spontaneously to tell me how they intend to vote.
In the second, much larger poll, however, the sample was selected by sticking a pin in a map of Ayrshire, visiting the closest town, standing in its centre in the middle of the day, and shouting “Any fool gonna vote Tory?” at the first 10,000 men who pass by.
Such a method for estimating nationwide Tory support is likely to be biased, I think you’ll agree, even if it’s repeated, even if it surveys 100,000 voters. A biased estimate has the following characteristic: that even as it becomes more precise, it also becomes more precisely wrong. (Bear that in mind, the next time you read stories predicated upon a “poll of polls”, because this defect afflicts such objects a fortiori.)
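To see “precisely wrong” in action, consider a toy simulation (every number in it is invented, purely for illustration): suppose true Tory support is 37 per cent, but each Tory voter contacted agrees to respond only 70 per cent of the time, while everyone else always responds.

```python
import random

def biased_poll(n, true_support=0.37, tory_response_rate=0.7, seed=0):
    """Simulate a poll in which Tory voters are under-sampled: a contacted
    Tory responds with probability tory_response_rate; everyone else
    always responds. All figures are illustrative, not real polling data."""
    rng = random.Random(seed)
    responses = []
    while len(responses) < n:
        is_tory = rng.random() < true_support
        if is_tory and rng.random() > tory_response_rate:
            continue  # the unreached Tory silently drops out of the sample
        responses.append(is_tory)
    return sum(responses) / n

# However large n grows, the estimate converges not to 0.37 but to
# 0.7 * 0.37 / (0.7 * 0.37 + 0.63), i.e. about 29 per cent:
estimate = biased_poll(100_000)
```

A bigger sample narrows the scatter around 29 per cent; it does nothing to close the gap to 37. Averaging many such polls – a “poll of polls” – narrows the scatter further still, around the same wrong number.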
Now, pollsters are clever people, and don’t conduct their surveys by yelling at their sample subjects, and, in fact, spend a lot of time trying to remove the bias from their samples. But biased their estimates remain, because no opinion poll is constructed from a random sample (this is why the exit “poll” is better: it’s a truly random selection from the exact target population, people who actually voted).
(Nor is bias always the enemy of good estimation. Bayesian statisticians, <cough>, will introduce a little bias into their estimates in order to score a big hit on precision. But I’m digressing.)
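For the curious, that digression in a few lines: one standard Bayesian move is to replace the raw proportion with a beta-binomial posterior mean, which pulls the estimate toward a prior guess. A minimal sketch, with the prior mean and weight invented purely for illustration:

```python
def shrunk_estimate(supporters, n, prior_mean=0.5, prior_weight=10):
    """Beta-binomial posterior mean: equivalent to adding prior_weight
    imaginary respondents at prior_mean to the real sample. The result is
    slightly biased toward prior_mean, but less variable on small samples."""
    return (supporters + prior_mean * prior_weight) / (n + prior_weight)

raw = 39 / 100                        # 0.39, the unbiased estimate
shrunk = shrunk_estimate(39, 100)     # 0.40, nudged toward the prior
```

With the large sample the prior barely moves the estimate; with the small one it buys a visible reduction in variance at the price of a small, known bias.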
The alternative to the Swing Hypothesis is the Bias Hypothesis: that despite the best intentions of the pollsters, their methods systematically under-estimate the true level of Tory support. There’s a very long, very good empirical examination of this hypothesis here.
The Bias Hypothesis seems more credible than the Late Swing one, though we only care when the actual result is qualitatively different from the estimated one – when the question of who wins is answered by someone other than who we expected to win.
Consider the danger, were the Bias Hypothesis to be true: opinion polls dominate political coverage; they may well affect the choices we finally make in the polling booth (unless, somehow, we’re unaffected by that coverage); and they are consistently wrong. You don’t have to be a shy Tory to find this unsatisfactory; an angry one, maybe.
Clement Attlee wrote to Harold Laski in 1946, to tell him that “a period of silence on your part would be welcome.” I’d suggest that until the pollsters can explain why their products appear to under-estimate Conservative support – without the cod-psychology rubbish, suggesting the fault lies in the shame of the Conservative voter – they should consider Attlee’s advice. At the very least this would clear acres more space in the newspapers to discuss politics, rather than bar-charts.
And maybe we should consider a period of poll-purdah before a national vote. I’d rather know nothing of your voting intention than be told, authoritatively and repeatedly, that your intention is known quite precisely. That is: quite precisely wrong.