ACCURATE POLLING, FLAWED FORECAST - June 17, 2011
AN EMPIRICAL RETROSPECTIVE ON ELECTION 41
By Frank Graves
Introduction: The Nature and Purpose of this Test
“Mistakes are the portals of discovery.” – James Joyce
As the dust settles on what was an extraordinary 41st Canadian election campaign, it may be worthwhile to take a more careful look back at the polls. While we focus on our own research, our observations are intended to have more general relevance to the debate about the role of polling in the democratic process. In fact, the research offers important lessons on the shifting nature of our society, with implications for the role of polling that extend beyond the narrow yardstick of how closely a final poll of voter intention resembles the final election result.
This exercise is not an attempt at apology or rationalisation. Our final election poll was over five points below the final result for the Conservative Party, and our forecast of a Conservative minority was a mistake. The Conservative Party did considerably better than our final poll suggested and went on to win a majority government. At the time, we committed to doing some hard thinking and testing to identify the source(s) of the gap between our final polls and the election outcome.
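As a point of reference, pure random sampling error is too small to account for a gap of that size at typical final-poll sample sizes. A minimal sketch of the standard margin-of-error calculation (the sample size and support level below are hypothetical illustrations, not EKOS figures):

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for an estimated proportion p with sample size n."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical final poll: n = 2,000 respondents, measured Conservative
# support around 35 per cent.
moe = margin_of_error(0.35, 2000)  # ~0.021, i.e. about 2.1 points

# A five-point gap is well outside this band, which is why random
# sampling noise alone is an unconvincing explanation.
```

This is why hypothesis one, if true, would point to *systematic* bias rather than ordinary sampling variability.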
The ensuing exercise has provided some very interesting and surprising insights into why our final poll differed from the actual outcome. In the process of formally testing these questions, we uncovered some serendipitous findings that speak to broader challenges confronting modern polling. We also made some important discoveries about the shifting nature of democracy and political participation, which may matter more than merely assessing which poll came closest to the final outcome. The research shows a growing tension between the challenge of understanding all eligible voters and the more specific challenge of forecasting the outcome of the election.
The general view is that the polls performed rather poorly and that no pollster clearly predicted the final Conservative majority. This failure of forecast (a first for us), together with the generally inauspicious connection between final polling and the actual result, has been interpreted by some as further evidence that polls no longer “work”. Others have speculated that the gap may reflect voters altering their final choices as a strategic response to the polls themselves. Still others have noted that the difference between the polls and the final outcome may reflect the difference between the roughly 60 per cent who came out to vote and the broader population of all eligible voters.
The objective of our research was to test three separate hypotheses:
- The first hypothesis was that our polls were flawed due either to systematic sampling error or to measurement error. In particular, did EKOS polling systematically understate Conservative support?
- The second hypothesis was that there were final movements which occurred essentially in the ballot booth (we polled until Sunday) and that these late shifts accounted for the final discrepancy. More specifically, the speculation was that enough residual Liberal supporters abandoned the Liberal Party and shifted to the Conservatives to forestall the chances of an NDP-led coalition government. Embedded in this hypothesis is also the notion that the polls themselves had a causal influence on the final shape of the election, since the possibility of an NDP-led government was a surprise that would only have been evident through reading the polls. Notably, EKOS was the first and most consistent source of information on this NDP surge.
- The final hypothesis was that the differences between the final polling and the election outcome were due to differential turnout between all eligible voters and the subpopulation who actually voted. More specifically, the idea here is that the Conservatives were much more successful in getting out their voters than the other parties.
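The mechanism behind the third hypothesis can be illustrated with simple arithmetic: if one party's supporters turn out at a higher rate, its share of ballots cast exceeds its share of the full eligible population. The support and turnout figures below are purely hypothetical, chosen only to show the effect:

```python
def votes_cast_shares(support, turnout):
    """support: party -> share of all eligible voters (sums to 1.0).
    turnout: party -> fraction of that party's supporters who actually vote.
    Returns party -> share of ballots actually cast."""
    cast = {p: support[p] * turnout[p] for p in support}
    total = sum(cast.values())
    return {p: cast[p] / total for p in cast}

# Hypothetical figures: CPC at 35% among all eligible voters, but with a
# 15-point turnout advantage over the other parties.
support = {"CPC": 0.35, "NDP": 0.30, "LPC": 0.21, "Other": 0.14}
turnout = {"CPC": 0.70, "NDP": 0.55, "LPC": 0.55, "Other": 0.55}

shares = votes_cast_shares(support, turnout)
# shares["CPC"] comes out near 0.41: a poll that accurately measured all
# eligible voters would still sit several points below the vote count.
```

Under these assumed numbers, a perfectly accurate poll of the full electorate would trail the Conservatives' actual vote share by roughly five points, which is the size of the gap in question.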
Of course, it is possible that the gap was a product of a mixture of all three of these hypotheses. If the first hypothesis of survey bias is true, it has damning implications for our confidence in modern survey research. In our case, we took special care to model the entire population (including non-internet and cell-only households). We also incorporated random sampling methods with careful call-backs, replacement, and weighting. If our methods were no longer capable of accurately modelling overall populations, this would be a severe blow to our credibility. Worse, there would seem to have been no advantage to comprehensive, random sampling, since some of our competitors who used non-randomly recruited panels, or who ignored the growing number of cell-only households, came slightly closer to the final outcome. Our evidence leaves us comforted that it is still better to sample randomly and comprehensively; but in doing so, we may have produced a larger gap precisely because our more accurate sample of the entire voting population differed more from the final vote than the partial populations others were sampling.
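The kind of weighting mentioned above can be sketched as a simple post-stratification step: respondents from under-covered groups (such as cell-only households) are weighted up to their known population share. The categories and proportions below are hypothetical illustrations, not EKOS's actual weighting frame:

```python
# Assumed population shares versus the shares achieved in the sample
# (hypothetical numbers for illustration only).
population = {"landline": 0.85, "cell_only": 0.15}
sample     = {"landline": 0.95, "cell_only": 0.05}

# Post-stratification weight: population share / sample share.
weights = {cell: population[cell] / sample[cell] for cell in population}
# Here each cell-only respondent counts three times; each landline
# respondent counts slightly less than once.

def weighted_share(responses):
    """responses: list of (cell, supports_party) tuples.
    Returns the weighted proportion supporting the party."""
    total = sum(weights[c] for c, _ in responses)
    hits = sum(weights[c] for c, s in responses if s)
    return hits / total
```

The point of the passage above is that this kind of correction makes the sample resemble the full eligible population, which helps only if the full population is the thing one is trying to forecast.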
If the second hypothesis of an eleventh-hour shift is true, it poses at least two major challenges. First, a severe methodological challenge arises in a world where the act of observing, recording, and reporting on public opinion measurably alters public opinion (and voting behaviour). The pollster becomes a co-agent in the subject matter in ways that could make Heisenberg’s uncertainty principle seem minuscule. It is already very difficult to accurately measure and model human attitudes and behaviour. Does it become intractable when the reporting of polling is feeding back into and altering the very matter it seeks to record?
The second issue is one of ethics. If the reporting of polling is altering political outcomes, is that a desirable thing? Should the public be forced to make “purer” democratic choices in a state of relative cerebral hygiene (when it comes to polling data), or should, for example, residual Liberal voters be entitled to shift allegiances to the Conservatives because the polls suggest that to do otherwise might produce an NDP-led coalition government of which they disapprove?
These are very difficult questions, and our contribution here is intended to be unremittingly grounded in empirical evidence and logic. In fielding one last survey before putting away our federal vote intention polling tools for some time, we constructed a series of critically falsifiable tests designed to help us understand what was wrong and what was right about our polling. Failure and learning are ingredients of progress, but only if the opportunities are clearly seized.
Click here for the full report: accurate_polling_flawed_forecast