
Poll-Based Election Forecasts Will Always Struggle With Uncertainty

KEY POINTS FROM THIS ARTICLE

— We cannot fully measure the uncertainty in pre-election polls, and trying to account for that unknown uncertainty leads to very complex poll-based forecast models.

— The general public lacks the expertise necessary to appropriately interpret win probabilities from those complex models.

— Some research shows that overly certain forecasts might depress turnout, which could matter in close races.

Introduction

Humans generally do not like uncertainty. We like to think we can predict the future. That is why it is tempting to boil elections down to a simple set of numbers: The probability that Donald Trump or Joe Biden will win the election. Polls are a readily available, plentiful data source, and because we know that poll numbers correlate strongly with election outcomes as the election nears, it is enticing to use polls to create a model that estimates those probabilities.

After my forecast, which gave Hillary Clinton a 98% chance of winning on Election Day, collided with the fact of Trump’s victory, I spent a lot of time considering what went wrong and the impact of my work. I have concluded that marketing probabilistic poll-based forecasts to the general public is at best a disservice to the audience, and at worst could impact voter turnout and outcomes.

This conclusion is based on two interconnected issues. The first is that we do not really know how to measure all the different sources of uncertainty in any given poll. That’s particularly true of election polls that are trying to survey a population — the voters in a future election — that does not yet exist. Moreover, the sources of uncertainty shift with changes in polling methods.

That leads directly to the second problem. The challenge of developing models that account for both known and unknown uncertainty is catnip for political data analysts. There’s nothing wrong with trying to solve the puzzle. The problem is marketing those attempts to solve it to the general public in the form of a seemingly simple probability or odds statement that the public lacks the tools and context to appropriately interpret.

Uncertainty in polls

In short, polling error is generally larger than the reported margin of error.

When we think of uncertainty in polls, the first thing that comes to mind is the margin of error (MOE). It should really be labeled the “margin of sampling error,” because that is the only source of potential uncertainty, or error, that it measures. And even then, it measures a theoretical relationship between a truly random sample — in which every member of the population had a chance to be chosen — and the full population. Reality is much more complex. In the past, when nearly everyone had home telephones, we could account for random sampling error relatively easily with the MOE. Once pollsters applied weights to account for people who did not answer, a “design effect” for the weights could be added to the margin of error, and we still had a reasonable error estimate.
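For readers who want to see the arithmetic, here is a minimal sketch in Python (an illustration I am adding here, not any pollster’s actual code) of how the margin of sampling error for a proportion is usually computed, with a design effect inflating the variance to account for weighting. The function and the example numbers are assumptions chosen purely for illustration.

```python
import math

def margin_of_error(p, n, z=1.96, design_effect=1.0):
    # Margin of sampling error for a proportion p from a sample of size n.
    # design_effect inflates the variance to account for weighting;
    # 1.0 corresponds to a pure simple random sample.
    variance = p * (1 - p) / n
    return z * math.sqrt(variance * design_effect)

# A hypothetical 1,000-person poll showing 50% support:
print(round(100 * margin_of_error(0.5, 1000), 1))                     # roughly 3.1 points
# The same poll with an assumed design effect of 1.5 from weighting:
print(round(100 * margin_of_error(0.5, 1000, design_effect=1.5), 1))  # roughly 3.8 points
```

Even with the design effect folded in, that figure covers only sampling error; none of the other sources of error discussed below show up in it.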

That ideal framework has not applied since at least the mid-2000s, when many people started ditching landline home telephones for mobile phones. Then people stopped answering any type of phone, which does result in some biases, even if the polls are still mostly representative of the U.S. population.

The change in telephone technology and availability leads to a source of uncertainty that is very difficult to measure: coverage error, which refers to people who simply cannot be reached using a given polling method. Coverage is an issue in phone polls because cell phones are quite expensive to dial: Federal Communications Commission guidelines preclude the use of automatic dialers (or robocalls) for cell phones. But not calling enough cell phones means you do not reach all of the 78% of the population that relies mostly or completely on cell phones. The highest-quality phone polls do enough cell phone calling. Many do not.

Pollsters have also developed online methods since the mid-2000s, which are difficult to assess from a coverage error standpoint. A handful use “probability” methods to recruit a random selection of Americans to ensure representativeness, but most rely on volunteers who sign up to take surveys. These online “nonprobability” polls have proliferated in the election scene and vary greatly in quality. Some go to great lengths to achieve representativeness. Many do not.

Perhaps the biggest source of unmeasurable error in election polls is identifying “likely voters,” the process by which pollsters try to figure out who will vote. The population of voters in the future election simply does not yet exist to be sampled, which means any approximation will come with unknown, unmeasurable (until after the election) errors. Likely voter modeling can easily result in different estimates, depending on the judgment of the decision-makers. In the midst of a pandemic and potential changes in how we vote, estimating who is going to vote could be more difficult than usual.
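To make the role of judgment concrete, here is a toy sketch with hypothetical respondents and hypothetical screens (not any pollster’s real method) showing that two defensible likely-voter definitions applied to the same interviews can produce different toplines.

```python
# Hypothetical interviews: (candidate preference, 0-10 vote intent, voted in 2016?)
respondents = [
    ("A", 10, True), ("B", 9, True), ("A", 8, False),
    ("B", 10, False), ("A", 6, True), ("B", 4, True),
]

def topline_for(screen):
    # Candidate A's share among the respondents the screen counts as likely voters.
    likely = [r for r in respondents if screen(r)]
    return round(100 * sum(r[0] == "A" for r in likely) / len(likely))

# Screen 1: anyone expressing strong intent to vote.
print(topline_for(lambda r: r[1] >= 8))           # 50% for A under this screen
# Screen 2: moderate-or-better intent plus a 2016 vote-history requirement.
print(topline_for(lambda r: r[1] >= 6 and r[2]))  # 67% for A under this screen
```

Neither screen is wrong in any provable sense before the election; the error only becomes measurable after the votes are counted.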

Because both technology and populations are constantly changing, we might not be able to count on error rates in past elections being indicative of error rates in future elections. Forecasts often rely on past estimates of polling error to model uncertainty, but there is no guarantee history will repeat itself.

We are already seeing this debate play out in 2020. The Economist’s model is the first of this type out for 2020 and has taken some heat for its high probability estimates of a Biden win. As its chief architect, G. Elliott Morris, has noted, the pandemic presents a particular challenge in that it cannot be accounted for in the data. He has said the model could be adjusted to add more uncertainty, but also that showing substantially lower probabilities would require setting the poll variance extremely high. Morris and noted statistician Andrew Gelman, who collaborates on the model, acknowledged that the model is a work in progress and adjusted its uncertainty. (The adjustment only moved win probabilities about three percentage points; Biden maintains an 88% chance of winning the Electoral College.)

Nate Silver hinted that FiveThirtyEight’s model will have some data-based ways to adjust for the pandemic, but that model had not been released at the time of this writing. Silver later acknowledged that this year is “forcing a lot of ad-hoc decisions for election modelers.” The entire debate confirms that it is very difficult to build the appropriate amount of uncertainty into a model — as Gelman states, “all forecasting methods will have arbitrary choices. There’s no way around it. This is life.”

My own experience is instructive here as well. My forecast model debuted at the beginning of October 2016 in a reasonable place — 84% chance of a Clinton win. Within a couple of weeks, though, the certainty was only going in one direction — up, eventually landing at 98% on Election Day.

I had resisted adding additional sources of error that I deemed subjective, and so my primary mechanism of adding uncertainty was tied to the number of days before the election, under the empirically demonstrated assumption that polls closer to Election Day are more accurate. That meant that as Election Day approached, uncertainty decreased, which sent Clinton’s win probability soaring despite some polls showing the race getting closer. Combined with the state polls in Michigan, Pennsylvania, and Wisconsin missing a Trump win, this spelled disaster for my model’s predictions. If one of your principles is to have a completely data-driven forecast without inserting subjective judgment, you will underestimate error.
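As a rough illustration of that mechanism, consider a sketch in which the only error term is a normal spread tied to the number of days remaining. This is not my actual 2016 model; the functional form and parameters are assumptions chosen purely to show the dynamic.

```python
from statistics import NormalDist

def win_probability(lead_pts, days_out, base_sd=2.0, sd_per_day=0.1):
    # P(true margin > 0) when the only uncertainty is a normal error
    # whose spread shrinks as Election Day approaches.
    sd = base_sd + sd_per_day * days_out
    return 1 - NormalDist(mu=lead_pts, sigma=sd).cdf(0)

for days in (35, 14, 0):
    print(days, round(win_probability(4.0, days), 2))
# The same 4-point lead looks steadily more certain as days_out falls,
# even if the underlying polls have not moved at all.
```

With the spread shrinking on a fixed schedule, a stable lead mechanically produces a rising win probability as the election nears, whether or not the race is actually that settled.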

Marketing probabilities to the general public

The inability to generate simple measures of poll uncertainty means that any forecast model will become very complex in order to estimate uncertainty appropriately. Most people don’t have a solid understanding of how probability works, and the models are thoroughly inaccessible to those not trained in statistics, no matter how hard writers try to explain them. I tried, too, but my guess is that most people probably closed the page when they got to the word “Bayesian.”

That means when probability outputs are broadcast with great fanfare to the general public, most people will misunderstand them, make incorrect assumptions about what they mean, or filter them through their own biases. And, as Morris notes, the public is quite unlikely to understand that the probability estimates themselves have margins of error. Data visualization specialists work hard to communicate model outcomes well and generate helpful visuals. But at the end of the day, the output is still expressed in probability or odds language and often refers to “simulations” as if the whole world knows what that means.
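For what it is worth, the core idea behind those “simulations” fits in a few lines. The sketch below is a generic Monte Carlo illustration with made-up states, electoral vote counts, probabilities, and an independence assumption that no serious forecast would use; it is meant only to show what the word refers to.

```python
import random

# Toy map: state -> (electoral votes, estimated probability the candidate wins it).
states = {"MI": (16, 0.60), "PA": (20, 0.55), "WI": (10, 0.58)}
NEEDED = 24  # hypothetical winning threshold for this three-state toy map

def simulate_once():
    # Draw an independent outcome in each state and total the electoral votes won.
    return sum(ev for ev, p in states.values() if random.random() < p)

runs = 10_000
wins = sum(simulate_once() >= NEEDED for _ in range(runs))
print(f"Candidate wins in {wins / runs:.0%} of simulations")
```

That share of winning runs is the headline probability, which means the number inherits every assumption baked into the model underneath it.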

It is little wonder that research shows people are more likely to overestimate the certainty of an election outcome when given a probability than when shown poll results. Moreover, the same research shows that overly certain forecasts could depress voter turnout.

The question of whether forecasts impact turnout on Election Day is uncomfortable, because democratic ideals say that people should vote no matter what to have their voice heard. But what if there’s information saying that a candidate has a 90%+ chance of winning the election? Does that result in some people assuming their vote is not needed and staying home?

I doubt one would find large vote shifts attributable to such an effect, but in the 2016 election three key states (Michigan, Pennsylvania, and Wisconsin) were decided by less than one percentage point apiece. If not for around 80,000 votes, we would not be having this conversation.

Natalie Jackson, Ph.D., is Director of Research at the Public Religion Research Institute (PRRI). She was previously Senior Polling Editor and responsible for election forecasting efforts at The Huffington Post from 2014-2017. Views expressed herein are her own and not representative of any employer, past or present.