Pick Up the Phone – Pollsters Know What They’re Doing

By Frank Graves

[Ottawa – May 2, 2017] The election of Donald Trump last November shocked most observers of U.S. politics. The consensus predictions overwhelmingly favoured a Clinton victory. Heading into election night, The New York Times pegged the odds of a Clinton victory at 85%. Others published similarly high odds for a Clinton win, including FiveThirtyEight.com (71% chance), the Huffington Post (98% chance), PredictWise (89% chance), the Princeton Election Consortium (99% chance), and Daily Kos (92% chance).

In hindsight, these predictions, in their overwhelming certainty, seemed more like science fiction than scientific probabilities. How could the overwhelming consensus drawn from so many have gotten it so wrong? Coming from the same species that once thought the earth was flat and that parachute pants were stylish, we really shouldn’t be so surprised. But more importantly: WHO DO WE BLAME?

“You’d think the leading voices in the media and among pollsters would have learned their lesson after being so off the mark in their coverage of Donald Trump,” opines Anthony Furey in a recent column in the Toronto Sun.

Hold on a tic, there, Tony!

There’s a pervasive misconception that the predictions cited above are “polling” and, as such, that the polling got it wrong. As I have noted before, the problem wasn’t the polling – it was the predictions. The polls were, in fact, reasonably accurate. The last poll published before the election, from the UPI, showed the national result to be 48.8% for Hillary Clinton and 46.2% for Donald Trump. The actual election results, representing the nationwide popular vote, were 48.1% for Hillary Clinton and 46.0% for Donald Trump.

With a few exceptions, the other polls published on the eve of the election had similar results, coming reasonably close to the election result (give or take a percentage point or two).
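To put “reasonably close” in concrete terms, here is a minimal sketch – in Python, purely for illustration and using only the figures quoted above – that computes the percentage-point gap between the final UPI poll and the actual popular vote. The error measure (a simple point difference per candidate, plus the Clinton-minus-Trump margin) is my own choice for this example, not a claim about how any polling organization scores itself.

# Compare the final UPI poll figures with the actual nationwide popular vote.
# The numbers are the percentages quoted in the text above.
final_poll = {"Clinton": 48.8, "Trump": 46.2}   # last pre-election UPI poll (%)
actual     = {"Clinton": 48.1, "Trump": 46.0}   # nationwide popular vote (%)

for candidate in final_poll:
    error = final_poll[candidate] - actual[candidate]
    print(f"{candidate}: poll {final_poll[candidate]}%, actual {actual[candidate]}%, "
          f"error {error:+.1f} points")

# Margin check: the poll's Clinton lead versus the actual Clinton lead.
poll_margin = final_poll["Clinton"] - final_poll["Trump"]
actual_margin = actual["Clinton"] - actual["Trump"]
print(f"Poll margin {poll_margin:+.1f} points vs. actual margin {actual_margin:+.1f} points")

On these numbers, the final poll missed each candidate by well under a point and the margin by about half a point – the sort of gap described above.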

All this brings us to a recent editorial from Erin Kelly titled, “Hang up the phone, pollsters.” In it, Kelly admonishes governments about the inaccuracy of polling, while offering a self-congratulatory infomercial on the virtues of conducting research via social media using artificial intelligence.

Before hanging up the phone, let’s examine the hard evidence a little more carefully. Kelly makes a number of claims about the uncanny accuracy of the methods she and her colleagues apply. Seeking to verify these claims, I went to the website of Kelly’s company, Advanced Symbolics, to review the unblemished record of electoral predictions described in her editorial. I was somewhat frustrated to find no such disclosure.

Failing to see any clear record of electoral predictions on the website, I listened to an interview Kelly gave to the CBC on October 27, 2016. In it, Kelly describes the AI she uses, named “Polly,” as a fully sentient being who has developed a remarkable talent for predicting elections. Along the way, she describes how her approach, using artificial intelligence, is superior to traditional polling in several ways. After five minutes and 25 seconds of this, the interview turned to the prediction. Who did Polly think would win the U.S. election?

Hillary Clinton.

Oops!

The interview took place on October 27, 12 days prior to the election and one day before FBI director James Comey sent a letter to Congress saying his bureau would re-open the investigation into Hillary Clinton’s emails. In light of this, it’s entirely possible that Polly – as a learning, sentient and, presumably, observant creature – could have absorbed this new information and arrived at a different prediction. If so, it doesn’t appear to have been published. Yet, in a piece published in the Globe and Mail on November 15th, “How our company called Trump’s climb,” Kelly suggests that her AI knew it would be Trump all along (or at least since the Republican National Convention in August)!

If getting the wrong answer makes you human, then Polly passes the Turing Test with flying colours. But getting the wrong answer, and then claiming you got it right, while blaming the pollsters for getting it wrong? That’s delusional, at best; disingenuous, at worst.

So, how did Polly do in the French election? Would the sentient creature’s untarnished (sic) record of prediction accuracy continue? Alas, I could find no predictions, but the French polls basically nailed the election. Meanwhile, an article in Le Monde pointed out that while the traditional polling got it right, the “alternative” forecasts got it wrong.

In the last seven federal elections, EKOS has correctly predicted the winner every time. We make these predictions in advance and then return to them after the results to do a post mortem. Our track record isn’t hard to find, either: you don’t need to sit through an interview or search the Internet fruitlessly to locate it. We’re proud of our record of accuracy, and open about where we’ve fallen short. It’s all on our website.

Ultimately, though, the best measure of a pollster’s quality isn’t the accuracy of its election calls. Getting an election result right to within tenths of a percentage point doesn’t make one polling organization better than another. What pollsters do best is clearly understand the social and economic currents that underlie voting intentions.

Setting aside the dubious claim of prediction accuracy superior to those of us who poll for governments (full disclosure: we are a leading supplier), how well one predicts elections has virtually nothing to do with the needs of governments seeking scientifically accurate polling data to support evidence-based policy and communications. If it were simply about prediction, perhaps we could turn polling over to Michael Moore, or even The Simpsons, who nailed the Trump victory.

Governments need to know what all members of the citizenry think, not just those who are active on social media. They also need to ask specific questions about topics that few people, if any, are discussing on social media. It’s impossible to measure an issue through the digital exhaust of social media if the issue isn’t something people are talking about. Without doubt, there is an impressive and growing range of important things that social media, big data, and AI can do and say about our world. But providing scientifically accurate data on the key questions governments need answered to guide policy and communications isn’t yet one of them.

