Americans today are more polarized than ever, and the split between two ideological extremes complicates a forecaster’s job. Polarization privileges feelings over facts, confounding the separation of signal from noise that’s essential to forecasting accuracy. And a forecaster’s own biases and preferences can be harder to recognize, and to set aside, when society at large is polarized and the outcomes are personally consequential.
When Good Judgment Inc, a forecasting company with an unrivaled track record of accuracy, asked its professional Superforecasters to predict the outcome of the 2020 US election cycle, these challenges were front and center. Many Superforecasters live in the United States and feel deeply about the country’s political issues. Some worried this could cloud their forecasting judgment. But Superforecasters thrive in the face of challenges. Here is what they did, and what you can do, to improve the accuracy of your own predictions in a polarized world.
US Election 2020: Getting It Right
Getting it right, however, is only half of the picture. Good Judgment strives not only to be right but also to be right for the right reasons. When polarization abounds, this is all the more important. To calibrate their thinking, Superforecasters use three simple strategies that consistently produce more accurate predictions.
Consider Alternative Scenarios
While the Superforecasters as a group assigned high odds to a Democratic sweep, individual Superforecasters predicted a variety of outcomes. A diversity of views is essential for good forecasting, but on issues you hold dear, considering other views is easier said than done. In the week before the election, Good Judgment asked the Superforecasters as a group to imagine they could time-travel to a future in which the Republicans retained both the White House and the Senate. Regardless of their individual forecasts, they were then asked to explain why a “Blue Wave” election failed to occur in that future.
This is called a pre-mortem, or “what if,” exercise. Thinking through alternative scenarios ahead of the actual outcome accomplishes several goals. It forces forecasters to consider other perspectives and to rethink the reasoning and evidence behind their forecasts. It also tests their level of confidence (overconfidence being a far more common problem than underconfidence) and helps avoid hindsight bias when the forecasts are evaluated later.
Because Superforecasters already weigh multiple alternatives in making forecasts, this pre-mortem produced little change in the overall forecasts. Even after several days of internal debate on the “what if” scenarios, their aggregate probabilities barely moved.
But the exercise was useful. It showed that the Superforecasters’ predictions were well calibrated. It also produced multiple scenarios with detailed commentary, some of which proved clear-eyed in light of the actual events following the election.
Kjirste Morrell, one of Good Judgment’s leading Superforecasters, was among the participants in the exercise. She says she didn’t make large changes to her forecasts but underscores the value of the discussion.
“In retrospect, I should have placed more credence on the possibility of violence after the election, which was mentioned during the pre-mortem exercise,” she says.
Keep It Civil
Civility is a trait all forecasters can master, as witnessed on our public forecasting platform, GJ Open. Throughout the 2020 election cycle, moderators observed very few comments that fell outside the reasonable bounds of civil discourse. This relative civility on GJ Open may surprise those accustomed to the rough-and-tumble of the Twitterverse. But it comes as no shock to Good Judgment’s co-founder Barb Mellers, whose research suggests that forecasting tournaments can reduce political polarization.
As the election cycle intensified and the public debate grew more heated and personal elsewhere on social media, GJ Open continued to emphasize facts and reasoned argument. It showed that forecasters can learn to remain focused on what matters to the accuracy of their predictions and block out the noise of inflammatory rhetoric.
Keep Score
Keeping score is essential to good forecasting, says Good Judgment’s co-founder Philip E. Tetlock. Superforecasters are not the only professionals who recognize this. Weather forecasters, bridge players, and internal auditors all know that tracking prediction outcomes and getting timely feedback improve forecasting performance. Superforecasters express their forecasts as quantifiable probabilities and measure accuracy with Brier scores, which gauge how far those probabilities fell from what actually happened; the lower the score, the better. Keeping score enables forecasters and companies to learn from past mistakes and to better calibrate future forecasts.
No single forecast is truly right or wrong unless it is expressed with absolute certainty (0% or 100%). If the probability of President Trump being re-elected were 13% (Good Judgment’s forecast as of 1 November), we would expect him to win in roughly 13 of every 100 re-runs of history. That’s why forecasting accuracy is best judged over large numbers of questions.
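To make the scoring concrete, here is a minimal sketch in Python of the arithmetic behind a binary Brier score. The 13% figure comes from the forecast above; the function name and the list of additional forecasts are hypothetical, and this is not Good Judgment’s actual scoring code (some tournaments use the original multi-category Brier formula, which doubles these values for a two-outcome question).

```python
# A minimal sketch of binary Brier scoring (illustrative names only).

def brier_score(forecast: float, outcome: int) -> float:
    """Squared error between a probability forecast and the realized
    outcome (1 if the event happened, 0 if it did not). Lower is better:
    0.0 is perfect, and an uncommitted 50% forecast always scores 0.25."""
    return (forecast - outcome) ** 2

# Good Judgment's 1 November forecast: 13% chance of Trump re-election.
print(brier_score(0.13, 0))  # event did not happen -> 0.0169 (strong)
print(brier_score(0.13, 1))  # had it happened      -> 0.7569 (poor)

# Accuracy is judged across many questions by averaging the scores.
track_record = [(0.13, 0), (0.80, 1), (0.55, 1), (0.30, 0)]  # hypothetical
mean_brier = sum(brier_score(p, o) for p, o in track_record) / len(track_record)
print(f"Mean Brier score: {mean_brier:.4f}")
```

The point of the average is the point of the article: a single 13% forecast cannot be graded right or wrong on its own; only a long track record of scored questions reveals whether a forecaster’s probabilities are well calibrated.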
The Superforecasters’ accuracy has been scrutinized over hundreds and hundreds of questions, and a forecasting method that can beat them consistently has yet to be found. The Superforecasters know what they know—and what they don’t know. They know how to think through alternative scenarios and how to “disagree without being disagreeable.” They also know the importance of keeping score. When it comes to calculating the odds for even highly polarized topics, their process shows how best practices deliver the best accuracy.
* This article originally appeared in Luckbox Magazine and is shared with their permission.
Schedule a consultation to learn how our FutureFirst monitoring tool, custom Superforecasts, and training services can help your organization make better decisions.