
The polls weren’t ‘wrong’. But the interpretation was

The UK experienced a shock last Thursday night: the Conservatives won a majority in the General Election. For months, the opinion polls from the most respected names in market research had been predicting a hung parliament. So much so that most of the pre-election talk was of red lines – those things that wouldn’t be negotiated away in post-election coalition negotiations. How had the pollsters got it so wrong?

Not only did we wake up the next day with a Prime Minister many weren’t expecting; we also felt as though weeks of political polling, widely reported in the media, had been feeding us a lie. Or had it?

A huge amount of coverage in the past few days has been devoted to how the polling was “wrong”. Polling company bosses have been queuing up to apologise. But they needn’t. What happened on Thursday night just reminded us of all the things we’ve learnt in the past few years about how people make decisions.

We should apologise for poor interpretation, not for the polls themselves.

For me, there are three main lessons:

1. Research is most accurate when it’s closest to whatever it’s measuring

The exit poll was actually remarkably accurate. Conducted face-to-face as people left polling stations, it predicted the Tories as the largest party with 316 seats – not far off the 331 they won, and quite reasonable given how tight some of the more marginal seats were.

Survation conducted a poll on Wednesday afternoon, before voting began the next morning. It wasn’t released by the company (but that’s an argument for another day). It too predicted the share of the vote very accurately (37% Conservative and 31% Labour). It used exactly the same method as previous polls, which suggests there may have been a very late swing to the Conservatives. Not that surprising, given that 18% of voters made up their minds on polling day (Survation).

Both of these polls were close to the behaviour they were measuring: one taken hours before voting opened, the other minutes after people had voted.

We know that the closer we get to a real-life situation, the more accurate the feedback we get. If I tell you today what I’m going to eat for lunch next Tuesday, my answer will probably be “wrong”. I’m not suggesting people don’t know how they’ll vote. They just don’t know with absolute certainty how they’ll vote even a little way into the future. In a tight election, that uncertainty matters.

Perhaps the lesson here is that pre-judging an election weeks or months in advance is unwise. Early polls should only have been used as a guide, and more heavily caveated by those producing them.


2. Behavioural economics applies to all decisions, including voting

For some time, the market research world has been grappling with how contradictory people are: what they say is not what they do. We know from behavioural economic theory that, when making decisions, people are more attached to what they already have (the endowment effect) and that the fear of losing something is stronger than the draw of something newer and shinier (loss aversion).

For some reason, we didn’t apply this thinking to our political polling. To vote for a change in how the country is governed, people would have needed a much stronger pull from Labour than the level-pegging suggested in the polls. So of course the nation opted for the incumbents and the Tories won. Loss aversion won.

3. Using just one method doesn’t give the whole picture

All polls in the UK, whether face-to-face, online or telephone, use roughly the same question – something along the lines of, “Who would you vote for if there were a general election tomorrow?” Each asks this question of a representative cross-section of society. Unsurprisingly, all the polls gave a similar result. They were measuring the same thing.

But where else would you make such profound judgements based on a single piece of information? To understand how many people might buy a new product, a business wouldn’t just rely on how many said they’d buy it in a survey. They’d look at previous years’ sales data, at consumer trends, at how many similar products their competitors were selling and at what people were saying about their company. Only then would they build a complete picture.

The big mistake all commentators made was to hold up the opinion poll as sacrosanct at the expense of any other source of data. We know from previous elections that incumbents perform better on polling day, that opposition parties need bigger margins earlier in the campaign, and that leaders’ personality ratings matter. All the lessons were there (explored in a lot more depth in this article by Shaun Lawson). We ignored them.

Some final thoughts

The market research and opinion polling community is rightly questioning its role in this election. The polls weren’t “wrong”. But the interpretation was. We know that people are unreliable predictors of their own behaviour, that decisions are full of biases, and that using multiple methods and sources of data gives the best overall picture of an issue. For some reason, in this election, we forgot this. We won’t next time.
