How the Republicans Could Still Hold the White House and Senate

What if there’s no Blue Wave?

Over the week before the election, Good Judgment’s professional Superforecasters engaged in an extensive “pre-mortem” or “what-if” exercise regarding our forecasts for a Blue-Wave election. We asked the Superforecasters to imagine that they could time-travel to a future in which the 2020 election results are final, with the Republicans retaining both the White House and control of the Senate. Then, we asked them to “explain” why the outcomes differed from the most likely outcomes in their pre-election probabilistic forecasts. Thinking through these scenarios now, before the actual outcomes are known, helps to avoid hindsight bias (the tendency to view what actually happened as being more inevitable than it was).

If we had asked the same questions six months earlier, the Superforecasters would have responded differently because there would have been many more uncertainties still in play. In April, we didn’t know whether progressives would fully back the Democratic ticket. We didn’t know that former Vice President Biden would choose Senator Kamala Harris as his running mate. We certainly didn’t know how the COVID pandemic and its economic and social fallout would evolve. And, the possibilities of an “October surprise” were wide open.

Our what-if exercise unsurprisingly focused on the factors that remained most uncertain a week before Election Day. For each election question (Presidency and control of the Senate), the factors Superforecasters most frequently cited for the “wrong side of maybe” outcomes fell into three categories, summarized below.

Here’s how the Superforecasters talked about each of the possible explanations for the Presidency and control of the Senate remaining in Republican hands, despite our high probability estimates that the Democrats would prevail in each case.

    1. We underestimated the likelihood that “close races plus voting mechanics complications [would] lead to judicialization of the election result.” (Explanations generally consistent with this commenter’s view received 33% of the upvotes on the Presidential side and 16% of the upvotes on the Senate side.)

We already know quite a bit about early turnout, thanks to great work by people like Michael McDonald of the University of Florida. But we don’t know how many more people will cast votes. And, critically, we don’t know how many ballots will be disqualified because they arrived past the moving goalpost for eligibility or failed to meet some other requirement. Superforecasters expect the opposing camps to litigate these issues wherever the preliminary vote counts are close enough that disputed ballots could change the outcome. If our less-than-20% scenario for the Republicans to retain the White House materializes, many Superforecasters imagine that “a much-larger-than-expected percentage of mail-in ballots were rejected in the battleground states.”

Similarly, if the Republicans hold the Senate, Superforecasters anticipate that “optimization of Republican control of important election processes in key Senate battleground states: ballot collection processes, court decisions, restrictions on voter eligibility, etc.” will have played an important role. As with the Presidential race, they imagine that “mail-in voting complications [could] lead to undercounting of Democratic votes.” (Like most observers, they expect that Democrats are more likely than Republicans to vote by mail.)

By the way, “judicialization of the election result” does not imply a single Supreme Court decision that determines the outcome of the race. Rather, a host of state and federal court rulings already have affected which votes will count, and Superforecasters expect the courts to become involved in even more such decisions before this election is decided.

    2. We placed too much reliance on “polls that were less accurate than believed, even taking into account the fact that everyone knows they aren’t perfect, leading to people underestimating the chances of the party behind in the polls.” (26% of all upvotes – Presidency; 16% – Senate)

Superforecasters acknowledge that “most of us are influenced by models like 538, The Economist, and others that rely primarily on polls” when forecasting US elections. But we can’t determine exactly how much Good Judgment’s election forecasts rely on a particular poll or polls in general because our Superforecasts combine and weight the probabilities assigned by individual Superforecasters to arrive at a collective estimate of the likelihood of the various outcomes.
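
For readers curious what that kind of pooling can look like mechanically, here is a minimal sketch of a performance-weighted average of individual probabilities. It is an illustration under our own assumptions, not Good Judgment’s actual aggregation algorithm, which the text does not describe.

```python
def aggregate(forecasts, weights=None):
    """Combine individual probability estimates into one collective probability.

    forecasts: probabilities (0-1) from individual forecasters.
    weights:   optional non-negative weights, e.g. based on past accuracy.
    Illustrative weighted average only; Good Judgment's actual method may differ.
    """
    if weights is None:
        weights = [1.0] * len(forecasts)
    return sum(w * p for w, p in zip(weights, forecasts)) / sum(weights)

# Hypothetical example: three forecasters, with more weight on the two
# who have been more accurate in the past.
print(aggregate([0.80, 0.90, 0.70], weights=[2.0, 1.5, 1.0]))  # ≈ 0.81
```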

Recognizing that the 2016 polls favored a Clinton victory, most Superforecasters have discounted 2020 polls that show extremely high odds of a Biden win. Even so, if President Trump is re-elected and/or the Republicans retain control of the Senate, it is almost definitional that the polls will have been “wrong” in some way.

What polling “errors” do Superforecasters imagine to have occurred in those scenarios? The notion of the “shy Trump voter” was the single most frequently upvoted explanation: “Poll numbers systematically under-counted Republicans, due to either unwillingness to participate in polls or unwillingness to disclose support for Trump.” Other polling-related explanations included “underestimating differences between opinion polls and voter mobilization” and the potential failure of polling-related models such as those of 538 to account for the unique circumstances of holding an election in a pandemic year.

In the case of the Senate, their polling-related explanations tend to see any errors in less stark terms than they envision for the Presidential race. The most upvoted comments in this category included “polls whiffed in the key swing states (a little more than slightly), which brought the race(s) well within the margin of error” and “GOP overperforms the polls, within the margin of error, in the Sunbelt close races (particularly the runoffs in Georgia in January) and Iowa.”

    3. We underestimated the effectiveness of the Republican election strategy. (11% of all upvotes re the Presidential race)

Superforecasters anticipate that President Trump’s re-election, should it occur, would owe more to the effectiveness of the Republican election strategy than they had credited. The most commonly cited strategic “secret weapon” was the Republican “ground game,” with “door-to-door canvassing” proving to be more potent than expected. They also cited the Trump campaign’s “digital advertising” as a factor that they may have underestimated.

    4. We underestimated the extent of split voting (voting for Biden, but for a Republican Senatorial candidate). (29% of all upvotes on the Senate question)

The Superforecasters already project a lower probability for the Democrats to take back the Senate than they do for a Biden victory. But that’s an apples-and-oranges comparison because only 35 of the 100 Senate seats are up for election in 2020, whereas the entire country is in play for the Presidency.

When imagining that the Republicans retain control of the Senate, Superforecasters most commonly anticipate that our forecasts may have underestimated split, or crossover, voting. They see two aspects here: first, voters think more positively about some Republican Senatorial candidates than they do about President Trump; and second, some voters are “disillusioned” with the President but want to see the Republicans hold onto the Senate as a check against a Democratic President and House of Representatives.

The Bottom Line

In this “what-if” exercise, Good Judgment asked the Superforecasters to assume for the sake of argument that the Republicans maintain control of both the White House and the Senate. A key goal of this exercise is to nudge forecasters to rethink the reasoning and evidence supporting their forecasts with an eye to adjusting their probability estimates. Yet, even after several days of internal debate, only a few Superforecasters lowered their estimated odds of a Blue-Wave election. Our aggregate forecasts barely moved.

In other words, a week before the election, Good Judgment’s Superforecasters think our current election forecasts are pretty well calibrated. They don’t see the outcomes as locked in by any means, but they are confident that their current levels of confidence are appropriate.

No single probabilistic forecast can be judged right or wrong by one outcome unless it is expressed in terms of absolute certainty (0% or 100%). If the true probability that President Trump will be re-elected is only 13% (our forecast as of November 1st), he would still win in roughly 13 of every 100 re-runs if we could replay history repeatedly. That’s why forecasting accuracy is best judged over large numbers of questions.
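
To see why a single outcome settles little while many questions settle a lot, here is a small simulation sketch of our own, using made-up questions. It compares the mean Brier score of a forecaster who reports the true probabilities with one who rounds every forecast to 0% or 100%.

```python
import random

random.seed(0)

def brier(p, outcome):
    """Squared error between a forecast probability p and an outcome (1 or 0)."""
    return (p - outcome) ** 2

# Illustrative setup: 200 hypothetical questions with randomly drawn "true"
# probabilities; each outcome is then drawn from its true probability.
true_probs = [random.random() for _ in range(200)]
outcomes = [1 if random.random() < p else 0 for p in true_probs]

# A well-calibrated forecaster reports the true probability; an overconfident
# one rounds every forecast to 0 or 1.
calibrated = sum(brier(p, o) for p, o in zip(true_probs, outcomes)) / len(outcomes)
overconfident = sum(brier(round(p), o) for p, o in zip(true_probs, outcomes)) / len(outcomes)

print(f"calibrated forecaster, mean Brier score:    {calibrated:.3f}")
print(f"overconfident forecaster, mean Brier score: {overconfident:.3f}")
```

Over a large batch of questions, the calibrated forecaster typically earns the lower (better) mean Brier score, even though on any single question the overconfident forecast can look “right.”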

We’ve looked at the accuracy of our Superforecasts over hundreds of questions and have yet to find any forecasting method that can beat them consistently. The Superforecasters know what they know – and what they don’t know. When it comes to handicapping the odds for geopolitical and economic events, they’re the best bookies around. Lacking a crystal ball, you’d be wise to give their forecasts serious consideration.

How to Become a Superforecaster®

A BBC listener asked the team behind the broadcaster’s “CrowdScience” radio show and podcast whether she might, in fact, be a Superforecaster. She’s not the first to wonder if she would qualify – and not the first to be curious about how Good Judgment spots and recruits superior forecasting talent.

For all curious souls out there, here’s the inside scoop.

Superforecaster Origins: the Good Judgment Project

Superforecasters were a surprise discovery of the Good Judgment Project (GJP), the research-and-development project that preceded Good Judgment Inc. GJP was the winner of the massive US-government-sponsored four-year geopolitical forecasting tournament known as ACE.

As our co-founder Phil Tetlock explains in his bestseller Superforecasting: The Art and Science of Prediction, we set up GJP as a controlled experiment with random assignment of participants. Results from the first year of the tournament established that teaming and training could boost forecasting accuracy. Therefore, we selected GJP superforecasters from the top 2% of forecasters in each experimental condition to account for the advantages of having been on a team and/or received training. To minimize the chance that outstanding accuracy resulted from luck rather than skill, we limited eligibility for GJP superforecaster status to those forecasters who participated in at least 50 forecasting questions during a tournament “season.”

Professional Superforecaster Selection

When the ACE tournament ended in mid-2015, Good Judgment Inc invited the forecasters with the best-established track records to become the core of our professional Superforecaster contingent. Most of Good Judgment Inc’s 150+ professional Superforecasters qualified through their relative accuracy during GJP. They had not only earned GJP superforecaster status over one tournament season, but also confirmed their accuracy over 50+ questions in a second year of forecasting.

Since the ACE tournament ended, Good Judgment has continued to identify and recruit new professional Superforecasters via our public forecasting platform, Good Judgment Open. Many of these new recruits never had a chance to participate in ACE, and they have performed every bit as well as the “original” GJP superforecasters.

Do You Have What It Takes?

Each autumn, Good Judgment identifies and recruits potential Superforecasters from the ranks of GJ Open forecasters. If you think you have what it takes to be a Superforecaster, put that belief to an objective test: forecast on at least 100 GJ Open questions (cumulatively, not necessarily in one year).

Those who consistently outperform the crowd are automatically eligible for our annual pro Super selection process and, potentially, a trial engagement. Forecasters who successfully complete the three-month probation period become full-fledged Superforecasters.

So, there you have it. Becoming a professional Superforecaster is the ultimate meritocracy. We don’t care where you live (as long as you have reliable Internet access!). When evaluating your qualifications for Superforecaster status, we ignore your gender, age, race, religion, and even education. (We are, however, keen to have an increasingly diverse pool of professional Superforecasters and encourage people of all backgrounds to test their skills on GJ Open!) We simply want to find the world’s most accurate forecasters – and to nurture their talents in a collaborative environment with other highly skilled professionals.

… If You’re Still Curious

While you’re pondering whether you have what it takes to be a Superforecaster, check out the BBC CrowdScience podcast episode that answers their listener’s question.

And, as always, you can visit our Superforecaster Analytics page to learn more about how Good Judgment’s professional Superforecasters provide early insights and well-calibrated probability estimates about key risks and opportunities to help governments, corporate clients, and NGOs make better decisions.

Ten Commandments for Aspiring Superforecasters

In Superforecasting: The Art and Science of Prediction, Good Judgment co-founder Philip Tetlock and his co-author Dan Gardner summarize the Good Judgment Project research findings in the form of “Ten Commandments for Aspiring Superforecasters.” These commandments describe behaviors that have been “experimentally demonstrated to boost [forecasting] accuracy.” You can learn more about these commandments—and practice applying them under the guidance of professional Superforecasters—at one of Good Judgment’s training workshops.

1. Triage

Focus on questions where your hard work is likely to pay off. Don’t waste time either on easy “clocklike” questions (where simple rules of thumb can get you close to the right answer) or on impenetrable “cloud-like” questions (where even fancy statistical models can’t beat the dart-throwing chimp). Concentrate on questions in the Goldilocks zone of difficulty, where effort pays off the most.

2. Break seemingly intractable problems into tractable sub-problems.

Channel the playful but disciplined spirit of Enrico Fermi who—when he wasn’t designing the world’s first atomic reactor—loved ballparking answers to head-scratchers such as “How many extraterrestrial civilizations exist in the universe?” Break apart the problem into its knowable and unknowable parts. Flush ignorance into the open. Expose and examine your assumptions. Dare to be wrong by making your best guesses. Better to discover errors quickly than to hide them behind vague verbiage.
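
As a toy illustration of that decomposition habit, here is a Fermi-style sketch in the spirit of the Drake equation. Every number below is a placeholder assumption meant to be exposed and argued over, not an estimate from the text.

```python
# Fermi-style decomposition: break one big unknown into smaller factors you
# can guess at, then multiply. All numbers are illustrative placeholders.
stars_in_galaxy = 2e11              # rough guess: stars in the Milky Way
frac_with_planets = 0.5             # fraction of stars with planetary systems
habitable_per_system = 0.1          # habitable planets per such system
frac_life_emerges = 0.01            # fraction of habitable planets where life emerges
frac_becomes_civilization = 0.01    # fraction of life-bearing planets with civilizations

civilizations = (stars_in_galaxy * frac_with_planets * habitable_per_system
                 * frac_life_emerges * frac_becomes_civilization)
print(f"ballpark estimate: {civilizations:,.0f} civilizations")  # ~1,000,000 with these guesses
```

The point is not the final number but the list of explicit, criticizable assumptions the exercise forces into the open.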

3. Strike the right balance between inside and outside views.

Superforecasters know that there is nothing new under the sun. Nothing is 100% “unique.” Language purists be damned: uniqueness is a matter of degree. So Superforecasters conduct creative searches for comparison classes even for seemingly unique events, such as the outcome of a hunt for a high-profile terrorist (Joseph Kony) or the standoff between a new socialist government in Athens and Greece’s creditors. Superforecasters are in the habit of posing the outside-view question: How often do things of this sort happen in situations of this sort?

4. Strike the right balance between under- and overreacting to evidence.

Belief updating is to good forecasting as brushing and flossing are to good dental hygiene. It can be boring, occasionally uncomfortable, but it pays off in the long term. That said, don’t suppose that belief updating is always easy because it sometimes is. Skillful updating requires teasing subtle signals from noisy news flows— all the while resisting the lure of wishful thinking.
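
One concrete way to practice that kind of disciplined updating is Bayes’ rule: start from a prior probability and move it only as far as the diagnosticity of the new evidence warrants. The sketch below is ours, with purely illustrative numbers.

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Posterior probability of a hypothesis after seeing a piece of evidence.

    prior:               probability of the hypothesis before the evidence
    p_evidence_if_true:  how likely the evidence is if the hypothesis is true
    p_evidence_if_false: how likely the evidence is if the hypothesis is false
    """
    numerator = prior * p_evidence_if_true
    return numerator / (numerator + (1 - prior) * p_evidence_if_false)

# Illustrative: a 30% prior, updated on evidence that is only mildly
# diagnostic (twice as likely if the hypothesis is true as if it is false).
print(bayes_update(0.30, 0.60, 0.30))  # ≈ 0.46, a measured shift, not a lurch to certainty
```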

5. Look for the clashing causal forces at work in each problem.

For every good policy argument, there is typically a counterargument that is at least worth acknowledging. For instance, if you are a devout dove who believes that threatening military action never brings peace, be open to the possibility that you might be wrong about Iran. And the same advice applies if you are a devout hawk who believes that soft “appeasement” policies never pay off. Each side should list, in advance, the signs that would nudge them toward the other.

6. Strive to distinguish as many degrees of doubt as the problem permits but no more.

As in poker, you have an advantage if you are better than your competitors at separating 60/40 bets from 40/60—or 55/45 from 45/55. Translating vague-verbiage hunches into numeric probabilities feels unnatural at first, but it can be done. It just requires patience and practice. The Superforecasters have shown what is possible.

7. Strike the right balance between under- and overconfidence, between prudence and decisiveness.

Superforecasters understand the risks both of rushing to judgment and of dawdling too long near “maybe.” They routinely manage the trade-off between the need to take decisive stands (who wants to listen to a waffler?) and the need to qualify their stands (who wants to listen to a blowhard?). They realize that long-term accuracy requires getting good scores on both calibration and resolution—which requires moving beyond blame-game ping-pong. It is not enough just to avoid the most recent mistake. They have to find creative ways to tamp down both types of forecasting errors—misses and false alarms—to the degree a fickle world permits such uncontroversial improvements in accuracy.
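
For readers who want those scoring terms made concrete, the sketch below (our illustration, not Good Judgment’s scoring code) bins forecasts by probability and computes the calibration and resolution terms of the standard Murphy decomposition of the Brier score.

```python
from collections import defaultdict

def calibration_and_resolution(forecasts, outcomes, n_bins=10):
    """Calibration and resolution terms from binned forecasts (Murphy decomposition).

    forecasts: probabilities in [0, 1]; outcomes: matching 1/0 results.
    Lower calibration is better (forecasts match observed frequencies);
    higher resolution is better (forecasts separate outcomes from the base rate).
    """
    bins = defaultdict(list)
    for p, o in zip(forecasts, outcomes):
        bins[min(int(p * n_bins), n_bins - 1)].append((p, o))

    base_rate = sum(outcomes) / len(outcomes)
    calibration = resolution = 0.0
    for items in bins.values():
        avg_p = sum(p for p, _ in items) / len(items)    # average forecast in the bin
        freq = sum(o for _, o in items) / len(items)     # observed frequency in the bin
        weight = len(items) / len(outcomes)
        calibration += weight * (avg_p - freq) ** 2
        resolution += weight * (freq - base_rate) ** 2

    return calibration, resolution

# Hypothetical toy data: four forecasts and their outcomes.
print(calibration_and_resolution([0.9, 0.8, 0.2, 0.1], [1, 1, 0, 0]))  # (0.025, 0.25)
```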

8. Look for the errors behind your mistakes but beware of rearview-mirror hindsight biases.

Don’t try to justify or excuse your failures. Own them! Conduct unflinching postmortems: Where exactly did I go wrong? And remember that although the more common error is to learn too little from failure and to overlook flaws in your basic assumptions, it is also possible to learn too much (you may have been basically on the right track but made a minor technical mistake that had big ramifications). Don’t forget to do postmortems on your successes, too. Not all successes imply that your reasoning was right. You may have just lucked out by making offsetting errors.

9. Bring out the best in others and let others bring out the best in you.

Master the fine art of team management, especially perspective taking (understanding the arguments of the other side so well that you can reproduce them to the other’s satisfaction), precision questioning (helping others to clarify their arguments so they are not misunderstood), and constructive confrontation (learning to disagree without being disagreeable). Wise leaders know how fine the line can be between a helpful suggestion and micromanagerial meddling or between a rigid group and a decisive one or between a scatterbrained group and an open-minded one.

10. Master the error-balancing bicycle.

Implementing each commandment requires balancing opposing errors. Just as you can’t learn to ride a bicycle by reading a physics textbook, you can’t become a superforecaster by reading training manuals. Learning requires doing, with good feedback that leaves no ambiguity about whether you are succeeding—“I’m rolling along smoothly!”—or whether you are failing—“crash!” Also remember that practice is not just going through the motions of making forecasts, or casually reading the news and tossing out probabilities. Like all other known forms of expertise, superforecasting is the product of deep, deliberative practice.

For those ready to engage in such deep, deliberative practice, our public forecasting site Good Judgment Open offers friendly competition against other aspiring Superforecasters in topical forecasting challenges covering finance and economics, geopolitics, popular culture, and many more topics. The most accurate Good Judgment Open forecasters also have the opportunity to join the ranks of Good Judgment’s professional Superforecasters, the most accurate forecasters in the business!

But don’t forget the “11th commandment”!

“It is impossible to lay down binding rules,” Helmuth von Moltke warned, “because two cases will never be exactly the same.”