Mar 01, 2023 | Insights

Solving the problems with polling

Written by Allan Gregg. Published in Policy Magazine.

Among the issues that have dominated the political and policy discourse over the past decade, the state of post-internet public opinion polling has been one of the most contentious. Given the misinformation and propaganda fogging the political landscape, are polls still an accurate reflection of reality? We went to one of the best long-time pollsters in the business, Allan Gregg, who also happens to be widely recognized as an inveterate straight shooter, for answers.

After decades spent in politics, public policy, media and public opinion research, I can think of few professionals whose job has become more difficult, or whose output more questionable, over the past 10 years than pollsters.

The difficulty of the job and the questions raised about our output in recent years are not a function of any deterioration in the quality of pollsters or their understanding of public opinion. To the contrary, the profession is one of constant learning about the never-ending complexity and nuances of citizen and consumer thinking and behaviour.

Simply put, our job has become more difficult because the foundation of our discipline has been crumbling over the past two decades, largely due to the disruptive impact of the same technology that has transformed other fields of endeavour, including journalism and politics.

For the opinions of a sample of voters to be projectable back to the population from which the sample was drawn, the survey respondents have to be selected on a random basis. In other words, everyone in the “universe” of voters has to have an equal “chance” of being selected and of participating in the survey. Random probability, in turn, is a well-established principle of not only polling but physics.

We all seem to know intuitively that if we flip a coin, we have a 50 percent chance of getting a head and a 50 percent chance of getting a tail. But in fact, this knowledge is not intuitive at all but scientific: if we flipped a coin until infinity, the share of heads and the share of tails would each converge to exactly 50 percent. Because of random probability theory, we also know that to determine the “chance” of getting a head or a tail, it is not necessary to flip the coin forever. We can flip it just 100 times and, 19 times out of 20, we will get somewhere between 40 and 60 heads and between 40 and 60 tails. If you don’t believe me, try it.
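If you would rather not flip a coin 100 times by hand, a few lines of code will do it for you. The sketch below (purely illustrative Python, not part of the original piece) repeats the 100-flip experiment many times and counts how often the result lands between 40 and 60 heads; it comes out at roughly 95 percent, or 19 times out of 20.

```python
import random

# Repeat the 100-flip experiment many times and count how often the number
# of heads lands in the 40-60 band (the "19 times out of 20" claim).
TRIALS = 10_000
FLIPS = 100

within_band = sum(
    1 for _ in range(TRIALS)
    if 40 <= sum(random.random() < 0.5 for _ in range(FLIPS)) <= 60
)

print(f"Share of runs with 40-60 heads: {within_band / TRIALS:.1%}")  # roughly 95%
```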

The same theory applies to survey research. If, for example, single women who live in apartment buildings make up 12 percent of the total voting population, and we select our sample randomly so that each of them has the same “chance” of being chosen as any other voter, then the sample we end up with will contain roughly 12 percent single women who live in apartment buildings.
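The same logic can be simulated. In the hypothetical sketch below (illustrative Python only; the 12 percent figure is simply the example from the text), a simple random sample drawn from a synthetic voter universe ends up containing roughly the same share of the subgroup as the universe itself.

```python
import random

# Synthetic "voter universe": True marks a member of the subgroup of interest
# (e.g. single women living in apartment buildings), set at 12% of the population.
POPULATION_SIZE = 1_000_000
SUBGROUP_SHARE = 0.12
universe = [random.random() < SUBGROUP_SHARE for _ in range(POPULATION_SIZE)]

# Simple random sample: every member of the universe has an equal chance of selection.
sample = random.sample(universe, 1_000)
print(f"Subgroup share in the sample: {sum(sample) / len(sample):.1%}")  # close to 12%
```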

When we started in the profession, everyone had a landline and response rates were in the 70-80 percent range. While not perfect, virtually everyone had an equal “chance” to be selected for a survey and, as a consequence, the samples we generated reflected the population from which they were drawn. While we still checked to make sure there was a sample “fit” with the general population, weighting and rejigging the sample was rarely necessary.

Today, we no longer have a universal distribution system that can reach the total population and therefore not everyone has an equal “chance” of being selected for a sample.

So, some pollsters still conduct landline surveys, but those samples tend to skew older, more rural and less educated, and they miss the nearly 46 percent of Canadian households that no longer have a landline. Some conduct online surveys with respondents drawn from “opt-in” panels, but those tend to skew younger, more urban and better educated, and they miss anyone without access to the internet or who chooses not to “opt in” to one of these panels. Some conduct surveys using mixed methods that combine landline, cell phone and internet interviews, on the assumption that the skews of each method will cancel one another out, and then weight the sample to make sure it reflects the fundamental regional, socio-economic and demographic characteristics of the total population. And then there are interactive voice response (IVR) surveys that bombard telephone numbers with recorded questions which, quite frankly, is no more scientific than gathering public opinion by stopping people at random on a street corner to ask them questions.

In all instances, response rates today – that is, the share of people selected who actually complete the survey – tend to be in the single digits or low teens, raising the even larger prospect that those who choose to participate in surveys have an obviously different “propensity” — or mindset — than those who do not.
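The “weighting” step mentioned above generally means adjusting respondents so the sample matches known population benchmarks. A minimal sketch of that idea, with made-up shares standing in for census figures and for what a survey actually returned, looks something like this:

```python
# Minimal cell-weighting sketch with made-up numbers: each group's weight is
# its population share divided by its share of the completed sample, so the
# weighted sample matches the known benchmarks.
population_share = {"18-34": 0.27, "35-54": 0.33, "55+": 0.40}  # e.g. census benchmarks
sample_share     = {"18-34": 0.18, "35-54": 0.30, "55+": 0.52}  # what the survey returned

weights = {group: population_share[group] / sample_share[group] for group in population_share}
for group, weight in weights.items():
    print(f"{group}: weight {weight:.2f}")  # under-represented groups get weights above 1
```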

Given the hodgepodge of these less-than-perfect methodologies, it’s no wonder that many question the accuracy of polls today. Indeed, for some, the question is not “Why are so many polls wrong?” but “How can any of them be right?”

For all our difficulties, both questions, by and large, are misplaced.

First, the gap between those with and without landlines, cell phones or internet access has actually shrunk over the past 10 years, giving the pollster — once again — nearly universal access to the total population.

Second, none of this is a secret within the survey research community, and many brilliant minds have been hard at work trying to find solutions to these challenges.

Indeed, ESOMAR (founded in 1947 as the Amsterdam-based European Society for Opinion and Market Research), now the global member organization representing market and opinion research, recently conducted a massive audit of international polling results compared to actual election outcomes. Commenting on ESOMAR’s findings, Canada’s own CRIC (the Canadian Research Insights Council) provided the following summary:

“By following the highest standards and best practices, CRIC members have been consistently accurate in predicting elections. In fact, a global analysis (ESOMAR) into the accuracy of surveys concluded that done well, surveys overwhelmingly continue to correctly predict election outcomes. The study looked at more than 31,000 surveys from 473 voting events across 40 countries spanning 1936 – 2017, and found that at a global level, the average error of surveys conducted within seven days before an election is +/-2.5%. The ESOMAR study included surveys from the four Canadian federal elections prior to 2019 and found that Canadian survey researchers performed well with average errors below the global average. CRIC members also correctly predicted the results of the 2019 and 2021 federal elections that took place after the ESOMAR study.”
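For context (a calculation of mine, not part of the CRIC summary or the ESOMAR study), the familiar “accurate to within x points, 19 times out of 20” wording comes from the standard margin-of-error formula for a simple random sample; for a typical 1,000-person poll it works out to roughly ±3.1 points.

```python
import math

# 95% margin of error for a simple random sample, at the worst case p = 0.5.
def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    return z * math.sqrt(p * (1 - p) / n)

print(f"n = 1,000: +/-{margin_of_error(1_000):.1%}")  # about +/-3.1 points
print(f"n = 100:   +/-{margin_of_error(100):.1%}")    # about +/-9.8 points (the coin example)
```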

And finally, while technological disruption and changing patterns of consumer behaviour have made it more difficult to generate random probability samples, the internet has facilitated infinitely more creative ways to gather public opinion compared to old-school telephone or in-person interviewing.

Because online interviews are self-administered – completed without a conversation with a live interviewer – we have increased the accuracy of responses in areas of investigation that might otherwise elicit a “socially desirable” answer. Respondents now routinely confess to “bad behaviours” like drinking and driving, or self-identify as minorities, whereas in the past this was virtually never the case.

Being able to embed visuals — be they text, graphics, photos or videos — not only enhances the interview experience for the respondent, it also allows us to know more fully whether the responses we are getting actually align with the question we are asking. Absent these visuals, in research designed to test advertising effectiveness for example, we had to ask questions like “Have you seen, read or heard any advertising that discussed climate change?”, “What was the main message of that advertising?” and “Who sponsored or paid for that ad?” The responses might be “yes”, “that the risks of global warming are being exaggerated” and “Shell Oil”. Well, Shell Oil never ran any such advertising, so we were completely unable to determine which part of the answer was inaccurate: whether the person ever actually saw an ad, whether they got the message wrong or whether they misidentified the sponsor. Today, we may ask the same questions, but now we are able to show the respondent the actual ad embedded in the online survey and know whether they are getting the message and sponsor right.

COVID-19 moved qualitative research and focus groups out of a physical and into a virtual setting and, much to our surprise, the quality of our recruitment and the actual discussions that take place in these sessions has improved, while our costs of doing this kind of research have gone down. In fact, we doubt that we will ever go back to in-person, physical focus groups.

Social media has also allowed us to “target” sub-populations with a precision never thought possible before. Back in the day, if we could identify and recruit 18-34 year-olds for a survey, we thought we were pretty sophisticated. Today, we can identify, target and recruit mothers who take their children to ballet classes (to use but one example) for a fraction of what it would cost to conduct the “youth survey” of the past, using random probability sampling.

Finally, whole new methodologies have evolved as a result of precision targeting and integrating visuals into a conversation. We now routinely conduct “online communities”, where we recruit participants through very specific social media sites (the mothers with children in ballet classes, to continue the example) and then host a moderated discussion on a Facebook-like platform over the course of three to four days. This form of iterative dialogue allows conversations to take their own course and generate new, real-time hypotheses that the researcher may not have anticipated when the research was initially designed. The platform also accommodates the injection and testing of new messages, concepts and graphics developed from the conversations or ideas the discussion itself has generated. In this way, the tools not only give us insight into what public opinion is, or what underlying sentiments might be driving it, but also into the factors that are most likely to change it. Given this power, these communities are tantamount to focus groups on steroids.

So yes, being a pollster isn’t as easy as flipping a coin anymore. And yes, we are forced to do a number of work-arounds to accommodate the difficulty of reaching a representative sample due to changing patterns of information consumption. But it is still a remarkably exciting and stimulating discipline that challenges the mind.

I always tell young people entering the profession that to be a great researcher, they have to learn to love being wrong. This usually produces a perplexed look and the obvious question: “Why?” Polling may have changed over the years, but the answer hasn’t. “Because when you’re wrong, you have to generate a new hypothesis. And then you get to test it!” This is, after all, a science.
