How to Combat Overconfidence—One Superforecaster’s Take

Military historian and Superforecaster® Jean-Pierre Beugoms is featured as an exemplar of outstanding thought processes in best-selling author and top Wharton professor Adam Grant’s latest book, Think Again. Below, he shares insights on overconfidence and how it can be avoided by judging the evidence properly.

Jean-Pierre Beugoms

A high-confidence forecast can be fully justified when the evidence supporting it is strong. When the evidence supporting such a forecast is weak, then we can say the forecaster is being overconfident. We can therefore avoid overconfidence by properly judging whether the evidence is strong or weak.

I have found that people often fall into the trap of making overconfident forecasts when they let their gut or intuition do the forecasting for them and when they dismiss or overlook critical information that contradicts their forecast rationales.

A textbook example of overconfidence might be Peter Funt’s March 29, 2021, column in USA Today entitled, “There’s zero chance Joe Biden will run in 2024.” A zero-probability forecast for a Biden reelection campaign may well be an accurate one, but the evidence he uses in support of his forecast is underwhelming.

First, Funt bases his forecast on out-of-date information. He points to Ryan Lizza’s campaign reporting which cites four people who say a reelection campaign is “inconceivable” to Biden, but he ignores the more recent reporting of The Hill, which notes that those close to Biden assume he will run again.

Second, Funt interprets Biden’s declaration that he sees his presidency as a bridge to the next generation of leaders in only one way: as a promise to serve one term. Biden’s statement is ambiguous, however. There is no reason why his bridge cannot encompass two terms.

Third, Funt dismisses Biden’s response to a reporter’s question asking him whether he will run again. Although Biden answered in the affirmative, Funt argues that Biden has no choice but to say yes because, if he says otherwise, he will immediately become a lame-duck president. The argument certainly makes sense, but is it not possible that Biden also means it?

Fourth, Funt completely neglects the “outside view.” He fails to look at what other ambitious people in high office have done. Had he done so, he would have realized that Biden’s decision to pass on a chance at winning reelection would be a highly unusual move even given his advanced age.

In short, had Funt considered the reporting that contradicted his assumptions, he might well have tempered his forecast. On the other hand, an article entitled “There’s a forty-five percent chance Biden will run in 2024” would not receive as many clicks.

The best way to guard against overconfidence in forecasting is to embrace uncertainty. Most people just want to know whether a fact is true or not and whether an event is going to happen or not. Denied this certainty, they will throw up their hands and declare the future as completely unknowable. This kind of thinking will get you into trouble because reality is often not quite as clear-cut.

You will have to get used to thinking in terms of probabilities of truth instead of yes (100%), no (0%), or who knows (50%). When confronted by a person who thinks in black-and-white terms, do not be afraid to say things like, “yes, but it depends,” or “what you say is true, but I am not as categorical as you are,” or “you are wrong, but not completely wrong.”

As part of your forecast rationale, include a list of uncertainties or possible secondary events that would have an effect on your forecast. Whittle down or expand the list as needed. Try this out and you will be well on your way toward being a well-calibrated forecaster.

Look at your forecasts with fresh eyes from time to time. Play the devil’s advocate and challenge the assumptions that undergird your high-confidence forecast. This exercise will help you weigh more objectively those annoying little facts that call your forecast into question.

Here are some facts that may undermine the high-confidence consensus forecast that the Tokyo Olympics and/or Paralympics will go as planned. While it is true that Tokyo is no longer under a state of emergency, the chances of a renewed surge in cases are not negligible, especially since a good many Japanese will not have received their vaccines by the time the games begin. A clear majority of the Japanese public oppose going ahead with the games, owing to these public health concerns. How would the government of Japan react to a public outcry over another wave of cases? While it is true that the political and economic incentives to hold the games are great, are we to believe that this hypothetical event would have no effect on their decision-making?

I am not arguing that anyone should moderate their near-certain yes forecast (e.g., 95%) to a more tentative yes forecast (e.g., 65%). I am saying that going through this exercise of questioning your own forecast, even if it does not result in you changing your forecast at all, will at least give you greater assurance that your high-confidence forecast is a sound one.

Learn to think like a Superforecaster with our Online Training course!
The three short modules provide a solid foundation for novice forecasters in fields such as finance, strategy, and consulting.

A Closer Look at a Superforecaster’s Scientific, Data-based Method

Mechanical engineer and Superforecaster® Kjirste Morrell is featured as an exemplar of outstanding thought processes in best-selling author and top Wharton professor Adam Grant’s latest book, Think Again. Below, she shares insights into her data-driven process for reviewing questions and determining her forecast.

Kjirste Morrell is one of the most accurate Superforecasters

Let me use the question on the number of Federal firearms background checks in April through June 2021 as an example. This question is based on a specific set of data that is influenced by many potential factors. The topic also evokes a strong emotional response, which can bring biases into the forecast, so the key is to focus on the data first and make sure you’re looking at the right data.

As with most questions that are about a number from a specific site, the very first step is to go to that site and look at the historical data. Get the correct data into a spreadsheet, using whatever method works best for you. Start making plots, such as the number of monthly background checks by year and over time, to see if there is a seasonal trend and to see if there is a general trend.

Even looking at the table of numbers, a few things are obvious, so there might be a temptation to skip plotting. I think plotting is worthwhile here, and the two plots I’ve attached emphasize different aspects of this particular set of data. With any data collected over several years, a few things I might wonder about are whether there is a seasonal trend and whether a general trend is apparent. In this case, there are some seasonal effects: there’s a peak in December, usually with a secondary peak in March. The number of background checks also rises over time with increasing variability, culminating in the much higher and more variable numbers of 2020 and early 2021.
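The year-by-year and over-time views described above can be sketched in a few lines of pandas. This is a minimal illustration, not Morrell’s actual workbook: the monthly figures below are placeholders standing in for the real NICS statistics, which would need to be transcribed from the FBI site.

```python
import pandas as pd

# Placeholder monthly NICS totals (millions of checks); the real numbers
# must be taken from the FBI's published statistics.
data = pd.DataFrame({
    "year":  [2019] * 12 + [2020] * 12 + [2021] * 3,
    "month": list(range(1, 13)) * 2 + [1, 2, 3],
    "checks_m": [
        2.2, 2.8, 2.6, 2.3, 2.3, 2.3, 2.0, 2.3, 2.2, 2.4, 2.5, 2.9,  # 2019
        2.7, 2.8, 3.7, 2.9, 3.1, 3.9, 3.6, 3.1, 2.9, 3.3, 3.6, 4.0,  # 2020
        4.3, 3.4, 4.7,                                                # 2021 (Jan-Mar)
    ],
})

# Pivot into a year-by-month table: one row per year, one column per month,
# so seasonal peaks (December, March) line up vertically.
by_year = data.pivot(index="year", columns="month", values="checks_m")
print(by_year.round(1))

# With matplotlib installed, the two plots are then one-liners:
#   by_year.T.plot()                 # month on the x-axis, one curve per year
#   data["checks_m"].plot()          # the long-run trend over time
```

The pivoted table makes the seasonal comparison readable even without the plots, which is often a useful fallback when eyeballing a question quickly.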

The first question I have is what monthly averages correspond to each bin in the question and how those compare to historical data. Lines at those averages have been added to the figures. In January there were 4.3 M checks, the largest monthly total so far; February was dramatically lower. At the other end, the 2.67 M/month average for the lowest bin is below any month in the last year. I can imagine events that lead to the total for April-June being either fewer than 8 M or more than 14 M, but some bins may be less likely than others.

Going forward, each month I would check the data at the FBI site and add in a new data point. Consider questions like: Does April rule out another bin or indicate a trend? What are the new monthly averages that would need to be met to end up in each bin?
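That month-by-month update step can be made concrete with a small helper: given the months observed so far, compute the average the remaining months would have to hit for the quarter total to reach each bin boundary. This is a sketch of the arithmetic only; the 3.5 M April figure below is a hypothetical, not a real data point.

```python
# Bin boundaries for the total April-June checks (millions), from the question.
boundaries_m = [8, 10, 12, 14]

def required_monthly_average(boundary_m, observed_m, months_left):
    """Average (in millions) the remaining months must reach for the
    quarter total to hit boundary_m million checks."""
    return (boundary_m - sum(observed_m)) / months_left

# Before any data arrives, each boundary is simply boundary / 3 per month:
# 8 -> 2.67, 10 -> 3.33, 12 -> 4.0, 14 -> 4.67.
for b in boundaries_m:
    print(b, round(required_monthly_average(b, [], 3), 2))

# Suppose (hypothetically) April comes in at 3.5 M. To clear 14 M total,
# May and June would each need to average (14 - 3.5) / 2 = 5.25 M --
# higher than any month on record, which is evidence against the top bin.
print(round(required_monthly_average(14, [3.5], 2), 2))
```

Rerunning this as each month’s figure is published shows at a glance which bins remain plausible and which now require an unprecedented pace.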

Once I have a reasonable sense of what the historical data looks like, I like to make a list of factors that could impact the number in question. A few that occur to me here are:

  • Are any new laws going into effect that will require more background checks?
  • Is reported violence rising or falling?
  • Does it seem like fear of future violence/unrest is rising or falling?
  • Has gun control legislation been discussed recently in news outlets?
  • Could supply affect this number?
  • Are there other limiting factors, like max throughput or number of people?
  • What is behind the seasonal peaks, especially March—are there sales events that explain that?
  • Do I understand where the number of background checks comes from?

Understanding what the data represents is especially worthwhile, and there is more information at the FBI site about the background check system. (Starting here: https://www.fbi.gov/services/cjis/nics.) Some of the other reports and statistics may be useful; perhaps something there serves as a leading indicator, offers more fine-grained data, or suggests another way of looking at the information.

That’s the process I would go through. Probably I would only get partway on the first pass and then add more when revisiting the question later.

Summary:

  • Go to the FBI link. The most important thing is to know what the data looks like for the source that will be used to settle the question.
  • Download the FBI data and put it into a spreadsheet.
  • Graph yearly and over time.
  • Break the bin boundaries into an average per month and plot alongside the data:
    • 8 M is an average of 2.67 M/month
    • 10 M is an average of 3.33 M/month
    • 12 M is an average of 4 M/month
    • 14 M is an average of 4.67 M/month
  • How reasonable is it that any of these averages will be met in April-June 2021?
  • As data is added for April & May, does that rule out any bins?
  • Make a list of factors that might affect the data and investigate those.

How would you forecast this question on Good Judgment Open?


Forecasting the Pandemic: Dealing with Death and Other Emotional Subjects

More than two million people are dead worldwide, with one-fourth of them in the United States. Such are the grim milestones the world has reached in its struggle with COVID-19.

These numbers came as little surprise to professional Superforecasters, who have been making accurate forecasts on these topics up to a year in advance.

“Unprecedented”

The world surpassed two million deaths attributed to COVID-19 on 15 January 2021. The US death toll reached 500,000 people on 22 February.

Good Judgment CEO Warren Hatch on CBS News, 22 June 2020

A year ago, in March 2020, the unfolding crisis was marked by extreme uncertainty and was widely described as “unprecedented.” Yet, Superforecasters have been assigning the highest probabilities to correct outcomes for worldwide cases and deaths since March 2020, and for US cases and death toll since June 2020.

As the pandemic progressed, Good Judgment forecasts proved “eerily accurate,” as Time magazine put it. “We’re on track to hit two million in reported deaths worldwide if things remain as they are,” one Superforecaster wrote on 26 August. “Add in seasonality, and things will get worse.”

And that’s exactly what happened, as we now know, almost five months after that forecast was made.

“Unless the pace starts to slacken dramatically, all signs point to around 525,000 or thereabouts before the end of March,” another Superforecaster wrote on the US death toll question in January 2021.

Superforecasters saw the highest likelihood of more than 53 million COVID-19 cases and 800,000 to 8 million COVID-related deaths worldwide a year in advance.

Forecasting Emotionally Difficult Questions

Of course, the fact that the COVID-19 numbers did not come as a surprise to Superforecasters as a group hardly makes working on such questions any easier.

“Forecasting deaths is difficult emotionally,” one of Good Judgment’s top Superforecasters, Kjirste Morrell, shared in an interview last month. “Rationally, I know my forecast has no effect on the number of deaths, but I still feel bad, slightly guilty, saying how many people I think will die.”

According to Superforecasters, the US had been on track for more than 23 million COVID-19 cases and over 350,000 deaths since June 2020.

For emotionally difficult questions, less adept forecasters frequently fall into a cognitive trap called the “social desirability bias.”

In short, the social desirability bias is a cognitive defense mechanism that, when faced with emotionally painful questions, downplays the negatives—from under-reporting less favorable behavior in surveys to underestimating the extent of catastrophic events in forecasting.

The implications could be vast: from invalidating research findings to underestimating the urgency or extent of required response to major events, such as pandemics, conflicts, or migration crises.

Professional Superforecasters are trained to recognize cognitive biases early and mitigate them through multiple effective strategies.

“Earth as an Ant Farm”

Ryan Adler, one of Good Judgment’s Directors and a Superforecaster himself, has described his strategy when forecasting on emotionally wrenching topics: “I try to think of Earth as an ant farm, with me on the outside looking in. From that perspective, it’s easier to forecast solely on what I think will happen, being influenced as little as possible by what I may hope will happen. India car sales is not a brutal topic. Boko Haram’s reign of terror in West Africa, however, is.”

Marc Koehler, Good Judgment’s Senior Vice President and also a Superforecaster, teaches a similar strategy in workshops the company offers to decision-makers in government, finance, and education institutions: “I try to adopt the point of view of a Martian anthropologist, examining homo sapiens through my telescope.”

“The most difficult part of being a professional Superforecaster is not confusing your preferred outcome with the most likely outcome,” says Jean-Pierre Beugoms, another leading Good Judgment Superforecaster interviewed last month. “The task of leaving out one’s biases is even harder when the most likely outcome is a looming disaster. When that happens, I approach the forecast with the following mindset: Forewarned is forearmed.”

To keep cognitive biases in check, Beugoms writes in advance a list of factors that could make him change his predictions.

“Failing to anticipate disasters has done society much harm.”

Superforecaster Jean-Pierre Beugoms

“I also try to approach these unpleasant questions with a sense of mission,” he adds. “Failing to anticipate disasters has done society much harm. An influential Superforecaster, with a record of correctly predicting disasters while avoiding false alarms, may well help society avoid future disasters or, if this proves impossible, can help it prepare for the recovery.”

Morrell has a similar view: “I remind myself that grim forecasts might result in a different course of action being taken. I can hope that changed conditions will make my forecast incorrect.”

What’s next for the world in the struggle against the COVID-19 pandemic? We continue to forecast the number of cases and deaths from COVID-19 and the rollout of vaccinations worldwide. We have delivered accurate forecasts about the timing of vaccine distribution around the world. Newer questions ask about herd immunity in the US and globally as well as how the new normal will look for “work from home” trends, office vacancy rates, and airport capacity. See the latest from Superforecasters on our Public Dashboard or through our exclusive all-access monitor, FutureFirst™.


Early Views on the Pandemic

Officials and Media

CDC Director Robert Redfield, 30 January 2020: “We still believe the immediate risk to the American public is low.”

New Scientist, 11 February 2020: “Could the new coronavirus really kill 50 million people worldwide? […] The short answer is that no one knows.”

USA Today, 17 February 2020: “Fauci doesn’t want people to worry about coronavirus, the danger of which is ‘just minuscule.’”

The Atlantic, 26 March 2020: “Are we winning the war against COVID-19? In the fog of pandemic, we simply don’t know.” 

President Donald Trump, 27 March 2020: “I’m not sure anybody even knows what it is.”

Scientific American, 1 June 2020: “This coronavirus is unprecedented in the combination of its easy transmissibility, a range of symptoms going from none at all to deadly, and the extent that it has disrupted the world.”

Superforecasters

Cheddar, 28 May 2020: “92 Percent Chance of U.S. Deaths Exceeding 200,000, Superforecaster Says”

Time, 11 June 2020: “’Superforecasters’ Are Making Eerily Accurate Predictions about COVID-19. Our Leaders Could Learn from Their Approach”

CBS News, 22 June 2020: “Forecasting the COVID-19 Pandemic: Firm Predicts 49% Chance that US Coronavirus-Related Deaths Will Exceed 350,000”

Forbes, 13 August 2020: “Superforecasters Predict Vaccine Next Year, Key to the Economic Forecast”

New Statesman, 20 October 2020: “The ‘Superforecasters’ at Good Judgment [Inc] – an American firm that produces impressively accurate predictions of world affairs – put the chances of an approved vaccine being widely available in the US before April 2021 as high as 70 per cent.”