Polling Observatory #19: British polling after the conference…and a look across the pond

This is the nineteenth in a series of posts that report on the state of the parties as measured by opinion polls. By pooling together all the available polling evidence we can reduce the impact of the random variation each individual survey inevitably produces. Most of the short term advances and setbacks in party polling fortunes are nothing more than random noise; the underlying trends – in which we are interested and which best assess the parties’ standings – are relatively stable and little influenced by day-to-day events. If there can ever be a definitive assessment of the parties’ standings, this is it. Further details of the method we use to build our estimates of public opinion can be found here.
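The intuition behind pooling is simply that averaging many noisy polls shrinks sampling error. As a rough illustration (not the Polling Observatory's actual model, which also handles house effects and trends over time), here is a minimal simulation showing that the standard error of a pooled estimate falls with the square root of the number of polls combined. The figures used (a 40% "true" support level, polls of 1,000 respondents) are hypothetical.

```python
import random
import statistics

random.seed(42)

TRUE_SUPPORT = 0.40   # hypothetical "real" share of the vote
POLL_SIZE = 1000      # respondents per poll
N_POLLS = 20          # number of polls pooled together

def run_poll(true_p, n):
    """Simulate one poll: share of n respondents backing the party."""
    hits = sum(1 for _ in range(n) if random.random() < true_p)
    return hits / n

polls = [run_poll(TRUE_SUPPORT, POLL_SIZE) for _ in range(N_POLLS)]
pooled = statistics.mean(polls)

# Theoretical standard error of one poll vs. the pooled average
se_single = (TRUE_SUPPORT * (1 - TRUE_SUPPORT) / POLL_SIZE) ** 0.5  # ~0.0155
se_pooled = se_single / N_POLLS ** 0.5                              # ~0.0035

print(f"single-poll SE:  {se_single:.4f}")
print(f"pooled SE:       {se_pooled:.4f}")
print(f"pooled estimate: {pooled:.4f}")
```

A single poll of 1,000 has a margin of error of roughly ±3 points, which is why individual surveys bounce around so much; pooling twenty such polls cuts that uncertainty by a factor of about four and a half.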

The conference season is now over for another year, and our latest polling estimate gives us a chance to gauge whether any of the main parties enjoyed a boost. The answer is a clear “no”: our estimates suggest that each party’s support was essentially static through the conference season. Labour ended October on 41.9%, up 0.4% on their pre-conference position; the Conservatives continue to trail on 31.4%, down 0.3%; while the Liberal Democrats fall slightly, down 0.6% to 7.9%. The overall political landscape remains much as it has been since the Conservatives’ post-budget “omnishambles” collapse in the spring. Nothing which has happened since has altered voters’ views, which is ominous for the Conservatives as it suggests the opinions of those who deserted the party earlier this year may have hardened against the government.

We have now introduced a fourth party, the UK Independence Party, to our estimates. UKIP have advanced steadily in the polls over this election cycle, although pollsters vary widely in their estimates of the party’s support, with internet pollsters tending to give them stronger results. The reasons for this are not entirely clear. Our model takes the view that the pollsters’ performance in the 2010 general election is the best guide for current estimates, and ends up splitting the difference to some extent, assuming that some of the strongest UKIP pollsters are over-estimating the party, while some of the weakest are under-estimating it. Our overall estimate suggests a slow but steady advance for the Eurosceptics, from a low of about 1.5% in the summer of 2010 to 7.6% in our current estimates. There are few bumps in the shallow upward trend, except for a sharp uptick around the time of “omnishambles”, when UKIP support jumped from 5% to 7.5%. This would suggest that much of UKIP’s support is coming from disgruntled Conservatives, a view backed up by other work.

So this conference season, like the last, was a disappointment for parties and pundits: despite gallons of ink spilt over speeches and strategies, the electorate was unmoved. The other big story of the past month was, of course, the Presidential election in the United States, where polling analysts found themselves unexpectedly caught in the partisan crossfire. After a strong first debate performance, the Republican candidate Mitt Romney, who had been trailing, closed much of the gap on President Barack Obama. Over the final three weeks of the campaign, many pundits (particularly on the right) declared that Romney had “momentum” and – based on “savvy”, “gut feeling” or “inside information” – that he was overtaking the President and would win come election day. Polling analysts such as Simon Jackman at Huffington Post-pollster.com, Sam Wang of the Princeton Election Consortium and most prominently Nate Silver of the New York Times’ fivethirtyeight blog poured scorn on this theory, pointing out that the polls had barely budged after their initial first debate shift and consistently pointed to a narrow victory for the President, eked out through strong performance in the crucial swing states. This led to an extraordinary barrage of vehement, ill-informed attacks from journalists and commentators who felt that such polling analysis was wrong-headed, partisan, or no substitute for journalistic wiles. A few Brits even decided to join in, such as cultural historian Dr Tim Stanley, a man with no experience in polling analysis, who nonetheless felt amply qualified to dismiss polls and analysis which he deemed had devolved “from an objective gauge of the public mood to a propaganda tool: partisan and inaccurate”.

The unusual thing about this dispute is how easy it was to settle: the polls were either going to be right or not come election day. Tuesday’s election returns can therefore be declared a resounding victory for polls and polling analysts – who called every state successfully – and a resounding defeat for “gut feeling”, “insider information” and Dr Tim Stanley. We hope that Dr Stanley will at least consider a course in elementary statistics before venturing into polling commentary again.

We can draw a few lessons from this little controversy for British politics and polling. Firstly, poll averaging can be a very powerful tool, and an important counter-weight to journalistic narratives which are often constructed based on very little solid evidence. Many media commentators were convinced Mitt Romney had momentum, but the polling clearly said he did not. The polling was correct. Secondly, many journalists and commentators have a very sketchy understanding of polling and statistics in general, and regard it with suspicion, particularly when it doesn’t agree with their partisan or professional biases. Journalists wanted a fight to the finish, and Republican partisans wanted a Romney victory, so both groups dismissed evidence which did not agree with these preconceptions. Thirdly, thanks to the rise of the internet, polling data is freely and easily available to all and so interested and numerate citizens no longer have to accept the campaign narrative constructed by the media commentariat. They can download the data, draw their own conclusions, and write these up for the world to see. Several young Americans made a name for themselves doing just that, including The New Republic’s Nate Cohn and the Guardian’s Harry Enten.
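The kind of do-it-yourself analysis described above can start very simply. As an illustrative sketch only (the poll figures below are invented, and real analysts would also weight by sample size and adjust for pollster house effects), a basic moving average already strips out much of the day-to-day bounce in a poll series:

```python
from statistics import mean

# Hypothetical daily poll readings for one party (% share); illustrative only
polls = [41.0, 43.5, 40.2, 42.8, 41.5, 39.9, 42.3, 41.1, 43.0, 40.8]

def rolling_average(series, window=5):
    """Smooth a poll series with a trailing moving average."""
    return [mean(series[max(0, i - window + 1): i + 1])
            for i in range(len(series))]

smoothed = rolling_average(polls)

# The smoothed series varies far less than the raw readings
raw_spread = max(polls) - min(polls)          # 3.6 points peak to trough
smooth_spread = max(smoothed) - min(smoothed)
print(f"raw spread:      {raw_spread:.1f} points")
print(f"smoothed spread: {smooth_spread:.2f} points")
```

Here the raw polls swing by 3.6 points while the smoothed series moves barely a point: a commentator reading the raw numbers would see "momentum" where the underlying trend is flat.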

So the 2012 US election was a “triumph of the nerds”. This is encouraging for the Polling Observatory team, as we look to apply similar methods of polling aggregation and analysis to clarify the political picture on this side of the Atlantic. Indeed, we tried our hand at forecasting in Britain ahead of the 2010 election, producing a seat-by-seat prediction model which performed pretty well in the end, getting the Conservative seat total exactly right and substantially outperforming a model constructed by Nate Silver, with whom we had an entertaining “nerdfight”. It is too early in the election cycle to begin producing forecasts for the 2015 election, but we feel that our polling analysis – and others such as the excellent UK Polling Report blog authored by Anthony Wells – still serves a valuable purpose, helping to separate the genuine shifts in the public mood from the random bumps and bounces produced by sampling error. As the election approaches, though, we will dust off our old forecasting model, spruce it up and put it to work figuring out how the next Parliament is likely to look. Watch this space.

Robert Ford, Will Jennings and Mark Pickup
