The 2020 election seems to be nearing the end of the vote-counting phase, with the final ballots in Nevada, Arizona, and Pennsylvania being tallied as I write. But one clear loser is already evident: the preelection polls. Public opinion estimates that showed Joe Biden with a commanding lead, even in the swing states and some potential pick-up states, appear to have been substantially off the mark. Because opinion research sometimes plays a role in expert witness testimony, and is often used in case assessment and pretrial jury research, it helps to think about how those polls failed.
In the “morning after” post in the publication Inside Elections, Nathan Gonzales writes, “No matter who wins the presidential race, it’s clear that the vast majority of the polling underestimated Trump’s support once again. For months, we’ve been saying that it would take dozens of pollsters, partisan and nonpartisan, independently making the same methodological mistake for the outcome to be different than what we were projecting. And that is apparently what happened.” The polls expected Trump to underperform his 2016 margins substantially, by as much as eight points, with some even going into double digits, but the actual margin seems to have been much narrower. While many correctly predicted that the race would come down to Wisconsin, Michigan, and Pennsylvania, the polling, as it did in 2016, underestimated Trump’s support. Despite being a historically unpopular president (according to polls, again), Trump actually received more votes in 2020 than in 2016. In this post, I’ll look at a few reasons the polls were off and what that says to trial lawyers as students of public opinion.
Measuring Attitudes Does Not Predict Behavior
Public opinion polls measure attitudes. At best, they measure expected behavior in a snapshot in time; they do not measure future behavior. It is quite possible that a peaking pandemic, as well as widespread uncertainty over the mail, influenced whether some people chose to participate at all. While record turnout argues against widespread suppression of the vote, we do not know at this point whether turnout could have been even greater.
Inevitably, social scientists will be looking closely at the reasons why the polls don’t seem to have matched the results during this election. But one lesson is that a prediction of future behavior always contains some uncertainty. That is one of the reasons why pretrial jury research tools, like attitude surveys, focus groups, and mock trials, are useful for assessing and preparing a case, but less useful for predicting the outcome.
Polls Cannot Always Account for Special Factors
In his piece, Nathan Gonzales notes, “A key question moving forward is whether public opinion polling is irreparably broken or if polling is just broken in elections with Trump on the ballot.” He notes that models for 2018 corrected the problems of 2016 (not enough working-class whites in the sample) and then did relatively well in that election. So it is understandable that pollsters were more confident that they were correct in 2020. How could that have been disrupted by Trump’s presence on the ballot? Researchers have a couple of theories. One is the “Shy Trump Voter” theory, which posits, based on a social desirability bias, that people who fear being thought of as racist or sexist are simply less likely to report a true preference for Trump to a pollster, but will privately vote that way on their ballot. Some believe that the Rasmussen poll does better at mitigating this bias because it uses a recorded voice rather than a human interviewer. Another theory is that Trump supporters are anti-elite and don’t trust the media, so they may misreport their actual leanings just to thumb their noses at the system.
The lesson here is that, even as attitude surveys can help you close in on a relevant attitude, they aim at general tendencies and cannot always capture what is unique about a given subject, situation, or location.
You’re Only as Good as Your Sample
Salvatore Babones, an American political sociologist at the University of Sydney, notes one more problem with the polls, one that is likely to get worse. In comments to the Sydney Morning Herald, he says, “It’s the dirty little secret the pollsters don’t want you to know: Virtually no one is answering their questions.” People no longer answer phone calls from unknown numbers. He reports that the Pew Research Center’s response rates have plummeted from 36 percent 20 years ago to just 6 percent now. The word is that, for others in the industry, the rate is closer to three percent.
Pollsters will, of course, do their best with what they get, applying weighting and stratification to their samples to approximate the population. But with response rates that low, it is inevitable that something is being missed. The recent rash of less accurate political polls may be a sign of just how much. At our own company, Persuasion Strategies, we have historically relied on the same method for surveys and for mock trial and focus group recruiting: random digit dialing by a market research company. Recently, however, we have looked at other sources like Amazon’s Mechanical Turk. In the past, those sources would have been verboten because they are essentially ‘opt-in’ panels of survey respondents. But we have seen that those tools can be applied with systematic, randomized selection, and they now fare better on representativeness than purely random recruiting, due in part to the collapse in response rates.
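To make the weighting idea concrete, here is a minimal sketch of post-stratification weighting: respondents from under-represented groups get proportionally larger weights so the weighted sample mirrors the population. Every number below (the population shares, the sample composition, the candidate support rates) is invented purely for illustration; real pollsters weight across many demographic cells at once, not just one.

```python
from collections import Counter

# Assumed population shares by education level (invented, census-style figures)
population_share = {"college": 0.35, "no_college": 0.65}

# A toy sample of 100 respondents that over-represents college graduates
sample = ["college"] * 70 + ["no_college"] * 30

counts = Counter(sample)
n = len(sample)

# Each respondent's weight = population share of their cell / sample share
weights = {cell: population_share[cell] / (counts[cell] / n)
           for cell in population_share}

# Invented support rates within each cell of the sample
support = {"college": 0.40, "no_college": 0.60}

# Unweighted estimate reflects the skewed sample: 0.46
raw = sum(support[r] for r in sample) / n

# Weighted estimate recovers the population rate in this toy setup: 0.53
weighted = sum(support[r] * weights[r] for r in sample) / n
```

The point of the sketch is the gap between `raw` and `weighted`: weighting fixes a known skew, but only for characteristics the pollster measures and weights on. If the 6 percent who answer the phone differ from the 94 percent who don’t in some unmeasured way, no amount of weighting can correct for it.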
In the weeks ahead, the 2020 vote count will be complete, we will have a better picture of the degree of the polls’ deviation from reality, and the data scientists will be taking a closer look at the reasons. In the meantime, it will help the rest of us to remember that social science isn’t the study of physics or chemistry; it is the study of humans. And part of the charm of humans is that we’re not fully predictable.
Image credit: 123rf.com, used under license