The allegation this week (8 June) that YouGov, in the midst of the 2017 general election campaign, didn’t publish a post-debate poll because it was too favourable to Jeremy Corbyn and the Labour Party was a serious one to make. Chris Curtis, a pollster who worked for YouGov at the time, has since partly retracted his claim, saying he accepts that the poll wasn’t published due to disagreements among YouGov’s team over the quality of its methodology.
For enough people of a certain political persuasion, however, the damage has been done. It has convinced them of their worst fear: that the polling industry publishes data selectively to suit a certain narrative.
The discussion over the poll in question is an interesting one but I suspect we’ll never receive an answer that satisfies everyone. YouGov had a poll that some of its team thought used a sub-par methodology. Others disagreed. That’s it. That applies to every data team out there, but political polling attracts political figures and people with partisan prejudices, and it’s inevitable that claims of party political interference will be made. The coverage of the controversy does, I think, offer an opportunity to discuss a problem with British polling more generally: its unhealthy fear of outliers.
I’ve been a poll watcher for ten years now, co-founding Britain Elects in 2013 and running it ever since. In my experience, British polling is among the best in the business for its methods and transparency, certainly better than much of what we see in the US. Indeed, polling is getting more accurate with every decade. It is a trade that has form for correctly measuring public opinion.
But like any commercial enterprise, pollsters are after street cred, too. They’re after a reputation for being as accurate as possible and better than the rest. Political polling doesn’t make as much money as some think – it certainly makes much less than client-based business polling – but it has a profound influence on perceptions of a company’s reliability.
At the 2015 general election the UK polling industry messed up – big time. While most published polls pointed to a dead heat between Labour and the Conservatives, the actual results put the Tories six percentage points clear. This failure did a lot of damage to the reputations of pollsters. Most sought to learn lessons and change their methods, determined to avoid a repeat.
Fast forward to the 2017 general election and you’re in a quickly shifting environment. The polls aren’t static as in 2015. They’re different every day. At the start of the campaign they showed a Tory lead north of 20 points. By the end of May, with the election to be held on 8 June, the Tories’ lead was more like five to seven points, with a Survation poll even putting the Tories ahead by a mere two.
Some in the industry were sceptical. “When something this dramatic happens, you struggle to believe it,” said Curtis. The idea of Theresa May losing the Conservatives’ majority defied long-standing narratives. Survation’s two-point Tory lead was met with laughter on the BBC’s Daily Politics.
A pollster’s reluctance to publish polls that contradict the campaign narrative may be understandable, but it is also undesirable. Standing outside the crowd is riskier to your reputation than following it, and pollsters were haunted by memories of 2015 – but this is polling. It is a science, and a bit of noise helps. You are presented with the data and, if you have faith in the methodology, I’d urge you to trust it. Outliers can be wrong – such is the nature of a science – but they are a healthy way to gauge how fluid, or not, public opinion is. They can be pivotal to detecting shifts in support, and they need to be published.
Exhibit A. In 2015 Survation refused to publish its final poll before election day because it put the Tories six points ahead of Labour. It was “so out of line” with other polls at the time, wrote Damian Lyons Lowe, chief executive of Survation, but not, it proved, with the actual result. “I chickened out of publishing the figures – something I’m sure I’ll always regret.”
More recently, Redfield and Wilton, the new pollsters in town, could do little but publish when their weekly survey following the local elections showed a dramatic shift against Labour – a shift entirely out of line with how Britons had just voted. Their successive surveys exposed what we all dismissed as an outlier as, well, an outlier. And that was that. Outliers happen and they are no bad thing.
This fear of outliers needs to end. Pollsters need to trust the data they see and allow the noise in that data to be seen, heard and covered. Varying methodologies, too, are healthy. Opinium’s recent decision to change how it defines undecided Tory voters is a credible methodological change and one that helps us to better understand how real Tory apathy is, and I am glad one pollster, at least, is doing something different. What it means is that Labour’s lead at the time of writing is likely to be between four points and eight points. That variation is healthy and trackers such as mine can, hopefully, provide a more credible picture of public opinion.
Pollsters copying one another – or “herding” in industry speak – does nothing to enhance our understanding of where British public opinion truly stands. I have loyally reported on British pollsters for the last decade. I respect their work and enjoy excellent working relationships with many in the industry. But I would say this to them: the heat is on you and those of us who use your data. Don’t make last-minute methodological changes out of anxiety. Don’t be afraid of outliers. Trust your data and damn well publish it.