Pollster who called the EU referendum right: No late Leave swing after all

How did TNS do it? Head of opinion polling explains to El Reg

A London polling station. Pic: Shutterstock

Analysis The EU referendum was a catastrophe for opinion pollsters. Remain’s official pollster, Populus, predicted a ten-point margin of victory for Remain.

Banks commissioned their own private polls, which also predicted a Remain victory, only to lose billions within hours as the strength of the Leave vote became clear.

However, one pollster stands out – it called the result right. The day before the vote, TNS BMRB showed that 43 per cent would vote Leave, 41 per cent Remain, and 16 per cent were undecided. How did TNS get it right when everyone else got it wrong?

Given that the UK and Polish general elections last year also produced more conservative results than most pollsters predicted, could we learn something useful from TNS's methods?

Intrigued, we invited Luke Taylor, senior associate director of the Methods and Statistics Team at TNS and its head of opinion polling, to explain how. He wrote this explanation for us.

Luke Taylor writes…

It’s very easy to criticise opinion polling, but accurate opinion polling is incredibly hard to do for a number of reasons.

Pollsters interview only a sample of people rather than the whole population, which means there are margins of error around any estimate. Polling is also a snapshot: voting intention among the general public can shift very rapidly, and many people do not fully make up their minds until the day of the referendum or election itself. Pollsters therefore aim to complete their final polls as late as possible in order to capture late swing.
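To give a sense of scale for that sampling error: under a simple random sampling assumption (a simplification – real polls use quota samples and weighting), the 95 per cent margin of error on a proportion can be sketched like this:

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a proportion p
    estimated from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# A 50/50 split in a sample of 1,000 gives roughly +/- 3 points,
# so a 43-41 gap is within the noise of a single poll.
print(round(100 * margin_of_error(0.5, 1000), 1))  # -> 3.1
```

Quadrupling the sample only halves the margin, which is one reason polling accuracy is hard to buy with sample size alone.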

We also need to get the results out fast. The highest quality government surveys have fieldwork periods of three months or more, but pollsters don’t have that luxury, so we have to rely on modelling to make our samples representative. Lastly, polls rely on people accurately telling us about their future behaviour. This is something that people are not always very good at doing, and a number change their minds even after saying that they will definitely vote in a particular direction.

The final TNS poll (published on 22nd June) was one of the few polls to show Leave ahead in the week leading up to the referendum; it showed that among registered voters 43 per cent favoured Leave, 41 per cent Remain and 16 per cent were undecided. It is hard to pinpoint any particular reason why these results ended up being more accurate than others, because each pollster's approach varies markedly in terms of questionnaire design, weighting of data, modelling of turnout and mode of interview. Our approach therefore shares some features with, and differs in other ways from, that used by each of the other pollsters.

Nevertheless, when looking back at the approach which TNS used for the EU referendum, we believe that some of the aspects outlined below were key in helping us to get as accurate a measure as possible.

Opinion poll samples tend to be too politically engaged, and this can make the estimates produced from polls inaccurate – a problem highlighted by the British Election Study team at Manchester University. We asked respondents about their likelihood of voting in the next General Election and used this (along with some demographic information) to model turnout, drawing on data we collected at the 2015 General Election. We then compensated for the imbalance by weighting the turnout level of our sample down to a more realistic level, reducing the influence of politically engaged individuals and giving us a more representative sample.
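A rough sketch of that kind of turnout correction (illustrative only – TNS has not published its actual weighting model): respondents classified as likely voters are down-weighted, and unlikely voters up-weighted, until the weighted sample matches a realistic turnout level.

```python
def turnout_weights(likely_voter, target):
    """Weight a sample so its likely-voter share matches a target turnout.

    likely_voter: list of booleans (modelled likely voter or not).
    target: a realistic turnout level, e.g. 0.66.
    """
    share = sum(likely_voter) / len(likely_voter)  # raw likely-voter share
    w_likely = target / share                      # < 1 if sample too engaged
    w_unlikely = (1 - target) / (1 - share)
    return [w_likely if lv else w_unlikely for lv in likely_voter]

# A sample that is 80% likely voters, weighted down to a 66% turnout:
ws = turnout_weights([True] * 80 + [False] * 20, 0.66)
# Likely voters now carry weight 0.825 each; unlikely voters 1.7 each.
```

With these weights, the weighted likely-voter share comes out at exactly 66 per cent, so over-engaged respondents no longer dominate the estimate.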

Education was also identified as a crucial factor in how people planned to vote in the EU referendum: those with a degree were much more likely to be Remain supporters. Our unweighted sample tended to contain too many individuals with a degree, so it was critical to weight the sample by education in order to get an accurate measure and avoid over-estimating support for Remain.

We also found that social grade was a crucial factor in how people intended to vote in the referendum, with higher social grades (ABC1s) more likely to vote Remain. It was therefore important to ensure that our opinion polls had the correct balance of lower social grades in order to correctly gauge the support for Leave.
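The education and social grade corrections are both instances of standard cell weighting: each group's weight is its population share divided by its share of the sample. A minimal sketch with hypothetical figures (not TNS's actual proportions):

```python
def cell_weights(sample_counts, population_shares):
    """Post-stratification weights: population share / sample share per cell."""
    n = sum(sample_counts.values())
    return {cell: population_shares[cell] / (count / n)
            for cell, count in sample_counts.items()}

# Hypothetical figures: 40% of the sample holds a degree vs 27% of the population.
sample = {"degree": 400, "no_degree": 600}
population = {"degree": 0.27, "no_degree": 0.73}
w = cell_weights(sample, population)
# Degree-holders are weighted down (0.675); non-graduates up (~1.217).
```

The same calculation applies to social grade cells (ABC1 vs C2DE), or to both factors jointly if the cross-tabulated population shares are known.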

It should be noted that our final poll was conducted between 16 and 22 June, so there was a possibility of a swing in opinion after that point which we would not capture. There was added risk in that 16 per cent of registered voters were still undecided, and we did not know whether they would vote, nor how. In both the Scottish and Quebec independence referendums, voters swung towards the status quo in the late stages of the campaigns, and there was a feeling that the same might happen in this referendum, with a late move towards Remain.

Our understanding is that some polling companies tried to take this into account by allocating a higher proportion of the undecided voters to Remain. The betting market also seems to have factored this in, as it consistently showed a higher probability of Remain than the opinion polls did. In our case, we had attempted to impute the choice that the Undecided voters might make (based on a number of other questions included in our polls), but we found no evidence that they were leaning more in one direction than the other, so we did not feel confident in predicting how these individuals would end up voting. We caveated this in our final press release and highlighted that there might well be a very late swing towards Remain.
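One simple way to attempt that kind of imputation – purely illustrative, since TNS has not published its model – is to assign each undecided respondent the choice of the most similar decided respondent, based on their answers to related attitude questions:

```python
def impute_undecided(decided, undecided):
    """decided: list of (answers, choice) pairs, where answers is a tuple of
    scores on auxiliary attitude questions; undecided: list of answer tuples.
    Returns a nearest-neighbour imputed choice for each undecided respondent."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return [min(decided, key=lambda d: dist(d[0], u))[1] for u in undecided]

# Hypothetical scores: -1 = strongly pro-Remain answer, +1 = strongly pro-Leave.
decided = [((-1.0, -0.8), "Remain"), ((0.9, 1.0), "Leave")]
print(impute_undecided(decided, [(0.8, 0.5), (-0.2, -0.9)]))
# -> ['Leave', 'Remain']
```

If the imputed choices split roughly evenly – as TNS found – the exercise gives no basis for reallocating the undecideds to either side.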

Knowing that late swing, and how the undecided made up their minds, would have a critical impact on the outcome of the referendum, we organised a qualitative Voter Panel. It comprised a broad mix of 200 individuals who intended to vote in the referendum, who told us their intention roughly a week before the vote, and who then told us how they actually voted on the day of the referendum itself. Whilst this qualitative research cannot be deemed fully representative of all voters in the UK, it suggested that there was no late swing in one direction or the other, and that the undecided vote ended up splitting largely evenly between the two camps.

It is, therefore, very important to say that the accuracy of our final poll (which rounded to 52 per cent Leave and 48 per cent Remain once the Undecided were excluded) was not just due to robust survey design, but also due to the fact that for this particular referendum the rough balance of opinion among the electorate did not change after we conducted our research.

It therefore needs to be caveated that there is no guarantee that this will also be the case in the next General Election! ®

Thanks to Ogilvy's Rory Sutherland for the tip – Andrew

TNS' full data tables are available here (PDF).
