Probabilistic Election Forecasts Are Stupid and You Should Not Look at Them
“…a probabilistic forecast is an expression of uncertainty. If a model gives a candidate a 15 percent chance, you’d expect that candidate to win about one election in every six or seven tries.”
“Consumers can misunderstand the forecasts, since probabilities are famously open to misinterpretation.”
“People associate numbers with precision, so using numbers to express uncertainty in the form of probabilities might not be intuitive.”
These criticisms of probabilistic election forecasting were written by the most famous popularizer of probabilistic election forecasting: Nate Silver.
But they were written seven years ago, in the aftermath of Donald Trump’s surprise victory, which was not predicted by any such model.
And yet here we go again, obsessing and arguing over the slightest changes in Silver’s latest probabilistic forecasts.
I shall submit to you my plea for us to go cold turkey. But first, here’s what’s leading the Washington Monthly website:
***
Debate Preview: The Adult in the Room: Contributing Editor Jonathan Alter expresses confidence that Kamala Harris will give a presidential performance in tonight’s showdown.
A Lot of People Underestimated Harris: James D. Zirin, a former federal prosecutor like Harris, argues the Democratic nominee has already proved naysayers wrong.
Stop Calling Kamala Harris’s Anti-Price-Gouging Proposal Price Controls: Fordham Law School Professor Zephyr Teachout defends Harris’s plan to tackle monopolistic behavior in the grocery industry.
***
Silver-driven panic intensified in Democratic circles last week as his Silver Bulletin forecast (he no longer runs FiveThirtyEight) put Trump’s chance of winning above 60 percent. Trump had crossed the 50 percent threshold the week before. As of today, his chances sit at 61.3 percent.
Several other folks are in the probabilistic forecasting game—including FiveThirtyEight, The Hill, The Economist, Brown Political Review, Race to the WH, and JHK Forecasts. None of these gives Trump the edge, and most put Harris’s chances of winning in the mid-50s.
This doesn’t mean the models favoring Harris are right and the one favoring Trump is wrong, or vice versa. All of these models show that the race is extremely close and could go either way.
“We had him with a 30 percent chance [28.6 percent to be precise] and that’s a pretty likely occurrence.” That’s what Silver said about Trump and his forecast after the 2016 election, arguing he should get credit for not fully counting Trump out, when other forecasts essentially did.
Fair enough, but that emphasis on uncertainty would certainly be useful today, with all forecasts showing an even closer race.
Most importantly, we don’t need probabilistic models to tell us about uncertainty. A simple averaging of state and national polls—with repeated reminders about how all polls are only snapshots of a fluid electorate and come with margins of error—would tell us the same, and with less confusion.
And if you don’t like how Real Clear Politics or FiveThirtyEight or The New York Times or the Washington Post handle their poll averages, you could do your own using the polls you deem of good enough quality. It’s not complicated.
But you’ll get the same story: The 2024 presidential election is an extremely close race that could go either way. Pretty much every national and battleground state poll, along with a few non-battleground state polls, shows a lead within the margin of error.
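If you want to see just how uncomplicated the exercise is, here is a minimal sketch in Python. The poll numbers and sample sizes are hypothetical stand-ins for whichever polls you trust, and the margin of error is the standard worst-case 95 percent calculation for a single poll:

```python
import math

# Each poll: (Harris %, Trump %, sample size). Hypothetical numbers.
polls = [
    (48.0, 47.0, 1000),
    (47.0, 48.0, 800),
    (48.5, 47.5, 1200),
]

# The average is just that: an average.
harris_avg = sum(h for h, t, n in polls) / len(polls)
trump_avg = sum(t for h, t, n in polls) / len(polls)
print(f"Average: Harris {harris_avg:.1f}, Trump {trump_avg:.1f}")

# Standard 95 percent margin of error for one poll, worst case (p = 0.5):
# MOE = 1.96 * sqrt(0.25 / n), expressed in percentage points.
for h, t, n in polls:
    moe = 1.96 * math.sqrt(0.25 / n) * 100
    lead = h - t
    status = "within" if abs(lead) <= moe else "outside"
    print(f"Lead {lead:+.1f} pts, MOE ±{moe:.1f} pts: {status} the margin of error")
```

Run it and you get the same headline the models sell you: every one of these hypothetical leads is smaller than its margin of error.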
Probabilistic forecasting came with an implicit promise that it would be better than a simplistic reliance on polls.
Silver first became a major figure in the 2008 primaries when, as an anonymous blogger, he predicted the outcomes of several late-season Democratic primaries without using polls, extrapolating instead from demographic vote preference data in earlier 2008 primaries and results from previous years. This raised interesting questions about the value of polls.
Since then, he has built general election forecast models that rely heavily on polls.
Granted, it would be hard to do otherwise. But that raises a different question:
What can a complex probabilistic forecast tell us that a simple poll average can’t?
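The mechanics, at least in broad strokes, are not mysterious. No forecaster publishes exact code, and real models layer on pollster ratings, fundamentals, and correlated state errors, but the core move is to take a poll margin, assume a distribution of polling error, and simulate many elections. A minimal sketch, with numbers that are my assumptions, not anyone’s actual model:

```python
import random

# Hypothetical inputs, not anyone’s actual model.
poll_margin = 1.0   # assumed poll-average lead, in points
error_sd = 4.0      # assumed standard deviation of total polling error, in points
trials = 100_000

# Simulate many elections: add random polling error to the lead,
# and count how often the leading candidate still wins.
wins = sum(1 for _ in range(trials)
           if poll_margin + random.gauss(0, error_sd) > 0)
print(f"Win probability: {wins / trials:.1%}")
```

With a 1-point lead and 4 points of assumed error, the simulation lands near 60 percent, which is why modest poll shifts can swing the headline probability so dramatically.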
On the day before the 2008 election, Silver gave Barack Obama a 96.3 percent chance of winning (though at one point in mid-August, just before the conventions, John McCain had a 52.1 percent chance). He also projected Obama would win 27 states and lead the popular vote by 6.1 points.
This was all pretty close to the mark. He missed only Obama’s wins in Indiana and Nebraska’s 2nd congressional district. And Obama’s popular vote margin was a bit higher, at 7.3 points.
But that old (and oft-maligned) dog of polling averages, Real Clear Politics (full disclosure: I have been a RCP contributing columnist), was closer to the mark on the popular vote, calculating a 7.6 point margin. It did slightly worse on the state projections; late polls errantly prompted RCP to shift Indiana and North Carolina out of the Obama column.
In June 2012, Silver’s model put the probability at “just over 60 percent,” making Obama a “very slight favorite.” Then, in the final days, tightening national polls caused much panic in Democratic circles. But with Obama’s state leads holding, Silver’s model grew more bullish, ending at a 90.9 percent chance of an Obama victory.
Silver called all the states correctly and projected Obama’s national popular vote share of 51 percent on the nose. Real Clear Politics missed Florida and was off by 3 points on the popular vote margin. The model looked supreme, though the RCP state averages also made clear that Obama was the favorite.
Then came 2016. Silver’s model not only gave Hillary Clinton a 71.4 percent chance of winning but also gave her the edge in Pennsylvania, Michigan, Wisconsin, North Carolina, and Florida. Real Clear Politics did better, more accurately giving Trump a slight edge in the two southeastern states (with final averages, rightly or wrongly, influenced by a couple of Republican-affiliated pollsters). Both slightly overestimated Clinton’s popular vote total, Silver by a bit more.
Real Clear Politics also did a little better in 2020. Silver (in his last election with the FiveThirtyEight site he founded) gave Joe Biden an 89 percent chance of winning, but overshot Biden’s popular vote margin by 3.5 points, and wrongly gave him the edge in Florida, North Carolina, and Maine’s second congressional district.
RCP’s poll averages also got two states wrong (Florida to Biden and Georgia to Trump) but came closer to the actual Electoral College count because the mistakes partially offset each other. Nationally, RCP was overly optimistic on Biden’s margin by 2.7 points.
My point is not that Real Clear Politics and its poll averages are an inherently more accurate operation than Silver’s modeling business. My point is that not much is gained by sifting poll averages along with additional (and subjectively chosen) data through a probabilistic filter.
To the extent there could be value in offering probabilities, as Silver periodically mentions, it lies in emphasizing uncertainty, not in feigning infallibility.
But after several election cycles with widely publicized probabilistic forecasting, we can tell that most consumers don’t focus on the uncertainty part.
Consumers of data journalism crave comfort in numbers. Most data journalists seemingly know better—as Silver’s comments from 2017 show—but also know what the market demands.
So the packaging of election forecast products doesn’t offer much in the way of a proverbial Surgeon General’s warning that the contents may create a false sense of certainty and can be highly addictive.
As with cigarettes, you’re better off not starting.
Best,
Bill