The Guardian view on polls: understanding failure doesn’t guarantee future success
Almost a year has passed since opinion polling suffered its worst reverse in living memory. Election night, 7 May 2015, stays in the mind for many reasons, but most of all for the unexpected and startling 10pm exit poll that predicted a Conservative majority after weeks of polling from every organisation had anticipated a hung parliament. Today the nearest thing to a definitive explanation is published. It is the result of months of careful analysis by an independent panel of academic experts in statistics and research methodology, and it concludes that all the polling organisations made the same mistake: their raw material, the population sample that underlay all the rest of their work, failed to represent reality.

Politics – both politicians themselves and voters – needs polling. It matters most at election time, when a sense of the shape of the future informs the debate about the present; if the polls are wrong, the debate is out of focus. What did the question of how Labour would handle the SNP matter, we knew on 8 May, when the future actually involved a majority Tory government committed to shrinking the state? This May and June, too soon for the recommendations in today's report to take effect, campaign strategy and reporting will again be calibrated on unpublished private polls, with a whiff of the kind of groupthink that misleadingly anticipated a Ukip surge in December's Oldham West and Royton byelection, a seat comfortably held by Labour. Working out how to get the result of the last election right won't guarantee accuracy in the next one.

Led by Patrick Sturgis, professor of research methodology at the University of Southampton, the panel had already concluded in its interim report that the pollsters failed because they had not reached enough Tory voters. That sounds less important than it is: it means the way the pollsters analysed and weighted their samples was not seriously flawed; rather, they began from a flawed impression of reality. The errors in their basic samples were shown up by the much more extensive British Election Study post-election survey, based on 3,000 face-to-face interviews. Talking to people in their homes revealed that those who were hardest to reach were the most likely to vote Tory. But that is too costly and time-consuming an exercise for regular pollsters and the newspapers that fund them.

The Sturgis review recommends that the BES conduct a pre-election attitudes survey to give polling organisations a genuinely contemporary view of who thinks what. It is a good solution, but not a perfect one. Knowing what someone thinks now is not the same as knowing how they will vote next time. Pollsters tend to base predictions on past behaviour, and when, as in 2015, the political landscape has been transformed over a single parliamentary term, the past becomes a much less reliable indicator. The rise of the nationalist parties and Ukip meant an unprecedented degree of fragmentation, while one of the main parties, the Lib Dems, had gone from being at least partly the home of the protest vote to a party of government. And the BES post-election survey, though it got the winner right, got the results further down the field, where other pollsters had been broadly accurate, badly wrong.

There is a lot here for everyone who cares about polling to absorb. But day-to-day polling will always be a slightly blurred snapshot of a moment in time: usually interesting, never definitive.