How Accurate Were the Political Science Forecasts of the 2016 Presidential Election?

With the dust settling from one of the most brutal and nasty presidential campaigns in modern American history and with the late vote returns creeping up to a final count, it is time to take stock of the presidential election forecasts offered initially to readers of the Crystal Ball website and then published in the October issue of PS: Political Science and Politics. Despite the surprising electoral vote victory of Donald Trump, the vote count as of one week after the election indicates that Democratic nominee Hillary Clinton received 50.5% of the two-party popular vote cast nationwide to Republican President-elect (yes, it is still jolting) Trump’s 49.5%.

So how did the forecasts do? From late June to early September in Sabato's Crystal Ball, eight forecasters or teams of forecasters issued 10 presidential election forecasts of the national two-party popular vote (along with the PollyVote meta-forecast assembled from an array of different types of forecasts). Aside from a few minor updates, these were the same forecasts later published in PS (in no case did the Crystal Ball and PS versions of a forecast differ by more than two-tenths of a percentage point). Table 1 reports the forecasts in order, from the one closest to the actual vote division as it stands at this time to the one with the largest absolute error.

Table 1: Political science forecasts of the 2016 presidential election

Notes: *As of noon on Nov. 16, 2016, the two-party vote for Hillary Clinton was 50.5% (with 130.5 million total votes reported), as calculated from official-source data gathered by David Wasserman. **A preliminary forecast from Lewis-Beck and Tien reported in mid-August was 51.1%. Their final and “official” forecast, published in PS and presented at the American Political Science Association meeting, is used here.

In an election with plenty of ups and downs in the polls and more than its share of controversies, from the revelation of a salacious old audio tape of Donald Trump to an off-again, on-again FBI probe of Hillary Clinton, and with non-academic daily-changing “forecasts” bouncing around erratically, the political science presidential forecasts generally fared quite well and several were extremely accurate. Five of the 10 forecasts were within one percentage point of the actual vote: those by Lockerbie, the Jeromes, and Lewis-Beck and Tien, as well as the forecasts from my two models. Three of these forecasts missed the actual vote by less than half of a percentage point. Another three of the forecasts (Abramowitz and the two entries by Erikson and Wlezien) were within two points of the vote. Holbrook’s forecast was two points off the vote. Norpoth’s forecast of a Trump popular vote majority had the largest vote error of the group, though it was made in early March, more than 35 weeks before the election, and was still within three points of the actual vote.

These numbers look pretty good as a group and, in some cases, look stellar, but how do they compare to other predictions this year? Measured against three other election forecasting enterprises, the political science forecasts were generally more accurate.

The first comparison is to Armstrong, Cuzan, Graefe, and Jones’ PollyVote forecast, which was reported along with the individual forecasts in our Crystal Ball and PS collection. PollyVote is a composite prediction that aggregates forecasts produced by varying methods, from expert judgments and prediction markets to the kind of econometric models in our political science collection. In late September, as the table indicates, PollyVote predicted a 52.6% vote for Clinton; its Election Day forecast was still 52.5%. With only two exceptions (Holbrook and Norpoth), each forecast in the Crystal Ball/PS collection, made between 60 and 133 days before the election, was more accurate than the PollyVote composite, even the version offered on Election Day. PollyVote and Holbrook did not appreciably differ, and PollyVote’s late-campaign forecast was just slightly more accurate than Norpoth’s forecast made more than half a year earlier.

The political science forecasts, again made months before Election Day, were also much more accurate than the polls, even the polls taken on the day before the election. The election eve RealClearPolitics average of the 10 major national polls had the race at 46.8% Clinton to 43.6% Trump, which translates into a 51.8% Clinton share of the two-party popular vote and a 1.3 percentage point error on Election Day. That error was a good deal larger than the errors of five of the 10 forecasts made at least two months earlier, and of several made much earlier than that. Both forecasts made by Erikson and Wlezien were about as accurate as the RCP Election Day poll average and were available far earlier. The Holbrook forecast, made two months before the election, and the Abramowitz forecast, made over three months in advance, were only marginally less accurate than the RCP Election Day poll average. Only the extremely early Norpoth forecast, ironically forecasting a Trump victory, was substantially less accurate than the final RCP poll average.
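For readers who want to check the conversion, here is a minimal sketch of the arithmetic in Python, using only the figures cited above; the variable names are mine, and this is not code from any of the forecasting teams.

```python
# Convert raw poll shares to a two-party share and score it against the result.
# Figures are those cited in the text: RCP election-eve averages of 46.8%
# Clinton and 43.6% Trump, against a 50.5% Clinton two-party vote.
clinton_raw, trump_raw = 46.8, 43.6
actual_two_party = 50.5

# The two-party share simply discards third-party and undecided respondents.
clinton_two_party = 100 * clinton_raw / (clinton_raw + trump_raw)
error = abs(clinton_two_party - actual_two_party)

print(f"Implied Clinton two-party share: {clinton_two_party:.1f}%")  # ~51.8%
print(f"Absolute error vs. actual vote: {error:.1f} points")         # ~1.3
```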

The political science forecasts can also be compared to the non-academic forecasts. Perhaps the best known of these is Nate Silver’s FiveThirtyEight forecast. I examined his “Polls-Plus” forecasts made from just after the nominating conventions in late July (102 days before the election) through the first presidential debate in late September (43 days before the election), well after all of the political science forecasts were in and set. Since the “Polls-Plus” forecasts were churned out on a daily basis, any meaningful comparison requires summarizing this blizzard of forecasts in some way; I used the median. The median 538 “Polls-Plus” forecast over the span from the second convention to the first debate was 51.8% of the two-party vote for Clinton. Some days the forecast was as low as 50.7% and on others as high as 52.5%, but if you just randomly checked in, it predicted about 51.8%. This 1.3 percentage point error (the same as the Election Day polling error) was larger than the errors of five of the 10 political science forecasts and comparable to Erikson and Wlezien’s two forecasts. The forecasts by Abramowitz and Holbrook were slightly less accurate than the “Polls-Plus” median. Only Norpoth’s extraordinarily early forecast was substantially less accurate than FiveThirtyEight’s.
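To make the summarizing step concrete, the sketch below collapses a daily forecast series to a single median and scores it, as described above. The daily series is a hypothetical placeholder spanning the cited range (50.7% to 52.5%); only the median (51.8%) and the actual result (50.5%) are figures from the text, and this is in no way FiveThirtyEight’s data or code.

```python
# Summarize a run of daily forecasts with the median, then score it.
from statistics import median

daily_polls_plus = [50.7, 51.2, 51.8, 52.1, 52.5]  # placeholder values only
actual_two_party = 50.5

summary_forecast = median(daily_polls_plus)
error = abs(summary_forecast - actual_two_party)

print(f"Median Polls-Plus forecast: {summary_forecast:.1f}%")  # 51.8% in the text
print(f"Absolute error: {error:.1f} points")                   # 1.3 in the text
```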

In sum, in addition to being stable, transparent, and early, most of the political science presidential election forecasts were quite accurate this year. Several were, by any standard, extremely accurate. And though some fared better than others, none crashed. With some exceptions, the accuracy of the political science presidential election forecasts also compared favorably to the alternative forecasting methods assembled in PollyVote, the major national polls collected by RealClearPolitics as late as Election Day, and the “Polls-Plus” forecasts of FiveThirtyEight.