Superforecasters’ Toolbox: Fermi-ization in Forecasting

Although usually a very private person, Superforecaster Peter Stamp agreed to be interviewed by a major Polish daily, Rzeczpospolita, at Good Judgment’s request. The reporter started the interview with a pop quiz: he asked Peter to estimate the number of tram cars that serve the city of Warsaw, Poland’s capital. Without using the internet, and without ever having been to Warsaw, Peter came up with a remarkably accurate answer in under three minutes (only 10% away from the actual number, according to the reporter, Marek Wierciszewski). All he needed for his calculations were the typical size of a Warsaw tram and the relative importance of this means of transportation.

The method Peter used was Fermi-ization, and it is one of the key techniques Superforecasters employ to tackle complex questions even with minimal information.

What Is Fermi-ization?

In his day, physicist Enrico Fermi (1901-1954) was known not only for his groundbreaking contributions to nuclear physics but also for his ability to produce surprisingly accurate estimates from scarce information. His technique was elegant in its simplicity: he would break a grand, seemingly intractable question down into smaller sub-questions or components that could be analyzed or researched, then make educated guesses about each component until he arrived at a final estimate.

Many science and engineering faculties teach this method today, often through assignments like “estimate the number of square inches of pizza the students will eat during one semester.” Instead of blurting out a random number, students are expected to break the question down into smaller parts and engage with each one to produce a thoughtful answer (here, the estimate would depend on such factors as the number of students, the number of pizzas a student eats per week, and the size of an average pizza).
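
To see how those pieces combine, here is a minimal sketch of such an estimate in Python. Every number in it is an illustrative guess, not data from any actual assignment; the point is the decomposition, not the inputs.

```python
import math

# Fermi estimate: square inches of pizza eaten by students in one semester.
# Every input below is an illustrative guess a student would refine with research.
students = 20_000                    # assumed enrollment
pizzas_per_student_per_week = 1.5    # assumed consumption rate
weeks_per_semester = 15

pizza_diameter_in = 14               # assumed average pizza diameter
pizza_area_sq_in = math.pi * (pizza_diameter_in / 2) ** 2  # ~154 sq in

total_sq_in = (students * pizzas_per_student_per_week
               * weeks_per_semester * pizza_area_sq_in)
print(f"{total_sq_in:,.0f} square inches")  # ~69 million; the order of magnitude is the goal
```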

Fermi-ization is a valuable tool in a Superforecaster’s toolbox. Since the days of the original Good Judgment Project and continuing in Good Judgment Inc’s work today, Superforecasters have proved the usefulness of this technique in producing accurate forecasts on seemingly impossible questions—from the scale of bird-flu epidemics, oil prices, and interest rates to election outcomes, regional conflict, and vaccinations during the Covid-19 pandemic.

Uses of Fermi-ization in Forecasting

In their seminal book Superforecasting, Philip Tetlock and Dan Gardner list Fermi-ization as the second of the Ten Commandments for Aspiring Superforecasters. This placement is not a coincidence. In the world of Superforecasters—experts known for their consistently accurate forecasts—Fermi-ization is a fundamental tool, enabling them to arrive at accurate predictions even in response to questions that initially seem impossible to quantify.

“Channel the playful but disciplined spirit of Enrico Fermi,” Tetlock and Gardner write. “Decompose the problem into its knowable and unknowable parts. Flush ignorance into the open. Expose and examine your assumptions. Dare to be wrong by making your best guesses. Better to discover errors quickly than to hide them behind vague verbiage.”

Depending on the question, this process can take just a few minutes, as it did when Peter worked out an estimate of the number of Warsaw’s tram cars, or it can be methodical, slow, and painstaking. Either way, it is an invaluable road map whenever accuracy is the goal.

Fermi-ization in forecasting has multiple uses:

    • It helps the forecaster to avoid the classic cognitive trap of relying on quick-and-easy—and often incorrect!—answers where more thought is called for.
    • It forces the forecaster to sort the relevant components from the irrelevant ones.
    • It enables the forecaster to separate the elements of the question that are knowable from those that are unknowable.
    • It makes the forecaster examine their assumptions more carefully and pushes them toward making educated—rather than blind—guesses.
    • It informs both the outside and the inside view in approaching the question.


Three Steps in Fermi-ization

Fermi-ization becomes easier and increasingly effective with practice. Keep these three steps in mind as you give it a try (a worked sketch follows the list).

    1. Unpack the question by asking, “What would it take for the answer to be yes? What would it take for it to be no?” or “What information would allow me to answer the question?”
    2. Give each scenario your best estimate.
    3. Dare to be wrong.
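
For a yes/no forecasting question, these steps often reduce to decomposing the probability over a few driving scenarios. Here is a minimal sketch in Python; the question, the scenario, and all probabilities are hypothetical, chosen only to illustrate the mechanics.

```python
# Hypothetical question: "Will candidate X win the election?"
# Step 1: unpack the question into scenarios that would drive a yes.
# Step 2: give each scenario your best estimate.
# Step 3: commit to a single number - dare to be wrong.

p_economy_improves = 0.6   # best guess for the key driver (assumption)
p_win_if_improves = 0.7    # chance X wins if the economy improves (assumption)
p_win_if_not = 0.3         # chance X wins otherwise (assumption)

# Combine the scenarios with the law of total probability.
p_win = (p_economy_improves * p_win_if_improves
         + (1 - p_economy_improves) * p_win_if_not)

print(f"P(win) = {p_win:.2f}")  # 0.54
```

Each input is now exposed for scrutiny: if new information changes your estimate of any component, the final probability updates mechanically rather than by gut feel.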


Not the Only Tool

Of course, Fermi-ization is not the only tool in a Superforecaster’s toolbox. Mitigating cognitive biases, recognizing and minimizing noise, staying actively open-minded, and keeping score are all crucial components of the Superforecasting process. You can learn these techniques during one of our Superforecasting Workshops, or you can pose your own questions for Superforecasters to engage with through a subscription to FutureFirst™.

Superforecasters: Still Crème de la Crème Six Years On

The multi-year geopolitical forecasting tournament sponsored by the research arm of the US Intelligence Community (IARPA) that led to the groundbreaking discovery of “Superforecasters” ended in 2015. Since then, public and private forecasting platforms and wisdom-of-the-crowd techniques have only proliferated. Six years on, are Good Judgment’s Superforecasters still more accurate than a group of regular forecasters? What, if anything, sets their forecasts apart from the forecasts of a large crowd?

[Figure: bar graph showing the Superforecasters’ error scores are lower than those of regular forecasters. From the paper: Superforecasters’ accuracy outstrips wisdom-of-the-crowd scores.]

A new white paper by Dr. Chris Karvetski, senior data and decision scientist with Good Judgment Inc (GJ Inc), compares six years’ worth of forecasts on the GJ Inc Superforecaster platform and the GJ Open public forecasting platform to answer these questions.

Key takeaway: Superforecasters, while a comparatively small group, are significantly more accurate than their GJ Open forecasting peers. The analysis shows they can forecast outcomes 300 days prior to resolution better than their peers do at 30 days from resolution.

Who are “Superforecasters”?

During the IARPA tournament, Superforecasters routinely placed in the top 2% for accuracy among their peers and were a winning component of the Good Judgment Project, one of five teams that competed in the initial tournaments. Notably, these elite forecasters were over 30% more accurate than US intelligence analysts forecasting the same events with access to classified information.

Key Findings

[Figure: calibration plot showing the Superforecasters are 79% closer to perfect calibration. From the paper: regular forecasters tend to show overconfidence, whereas the Superforecasters are close to perfect calibration.]

Dr. Karvetski’s analysis, presented in “Superforecasters: A Decade of Stochastic Dominance,” draws on six years of forecasting data (2015-2021) covering 108 geopolitical questions posted simultaneously on Good Judgment Inc’s Superforecaster platform (available to FutureFirst™ clients) and on Good Judgment Open (GJ Open), a public platform where anyone can sign up, make forecasts, and track their accuracy over time against their peers.

The data showed:

  • Despite being relatively small in number, the Superforecasters are much more prolific, making almost four times as many forecasts per question as GJ Open forecasters.
  • They are also much more likely to update their beliefs via small, incremental changes to their forecasts.
  • Based on their daily average error scores, the Superforecasters are 35.9% more accurate than their GJ Open counterparts.
  • Aggregation has a notably larger effect on GJ Open forecasters; even so, the aggregate Superforecaster forecasts are, on average, 25.1% more accurate than aggregate forecasts built from GJ Open forecasts.
  • The average error score for GJ Open forecasters at 30 days from resolution is larger than the Superforecasters’ average error score on any day up to 300 days prior to resolution.
  • GJ Open forecasters were, in general, overconfident in their forecasts. The Superforecasters, in contrast, are 79% better calibrated. “This implies a forecast from Superforecasters can be taken at its probabilistic face value,” Dr. Karvetski explains.
  • Finally, the amount of between-forecaster noise among the Superforecasters is minimal, implying they are better at translating a variety of different signals into a numeric estimate of chance. (A sketch of these metrics follows the list.)
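
To make these metrics concrete, here is a minimal sketch of how a Brier-style error score, a crowd aggregate, and a between-forecaster noise measure can be computed for a single binary question. The forecast data are invented, and Good Judgment’s actual scoring and aggregation methods are more sophisticated than this illustration.

```python
import statistics

def error_score(forecast_prob: float, outcome: int) -> float:
    """Brier-style squared error between a probability and the 0/1 outcome.
    0 is a perfect forecast; 1 is maximally wrong."""
    return (forecast_prob - outcome) ** 2

# Invented forecasts from five forecasters on one question that resolved "yes".
forecasts = [0.80, 0.75, 0.85, 0.70, 0.90]
outcome = 1

# Average individual error score (lower is better).
scores = [error_score(p, outcome) for p in forecasts]
print("mean individual error:", round(statistics.mean(scores), 3))

# Wisdom of the crowd: score an aggregate forecast (here, the median) instead.
aggregate = statistics.median(forecasts)
print("aggregate error:", round(error_score(aggregate, outcome), 3))

# Between-forecaster noise: disagreement among forecasters on the same
# question, here measured as the standard deviation of their probabilities.
print("between-forecaster noise:", round(statistics.stdev(forecasts), 3))
```

Calibration, in the same spirit, compares stated probabilities with observed frequencies across many questions: forecasters who say “80%” should be right about 80% of the time.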

You can read the full paper here.

Where Can I Learn More About Superforecasting?

A subscription to FutureFirst, Good Judgment’s exclusive monitoring tool, gives clients 24/7 access to Superforecasters’ forecasts to help companies and organizations quantify risk, improve judgment, and make better decisions about future events.

Our Superforecasting workshops incorporate Good Judgment research findings and practical Superforecaster know-how. Learn more about our private workshops, tailored to the needs of your organization, or the public workshops we offer.

A journey to becoming a Superforecaster begins at GJ Open. Learn more about how to become a Superforecaster.

Books on Making Better Decisions: Good Judgment’s Back-to-School Edition

Since the publication of Tetlock and Gardner’s seminal Superforecasting: The Art and Science of Prediction, many books and articles have been written about the ground-breaking findings of the Good Judgment Project, its corporate successor Good Judgment Inc, and the Superforecasters.

This is not surprising: Decision-makers have a lot to learn from the Superforecasters. Thanks to being actively open-minded and unafraid to rethink their conclusions, the Superforecasters have been able to make accurate predictions where experts often failed. They know how to think in probabilities (or “in bets”), reduce the noise in their judgments, and mitigate cognitive biases such as overconfidence. As Tetlock and Good Judgment Inc have shown, these are skills that can be learned.

Here is a short list of eight notable books that present a wealth of information on ways to evaluate an uncertain future and improve decision-making.

In 2011, IARPA—the research arm of the US intelligence community—launched a massive competition to identify cutting-edge methods to forecast geopolitical events. Four years, 500 questions, and over a million forecasts later, the Good Judgment Project (GJP)—led by Philip Tetlock and Barbara Mellers at the University of Pennsylvania—emerged as the undisputed victor in the tournament. GJP’s forecasts were so accurate that they even outperformed those of intelligence analysts with access to classified data. One of the biggest discoveries of GJP was the Superforecasters: GJP research found compelling evidence that some people are exceptionally skilled at assigning realistic probabilities to possible outcomes—even on topics outside their primary subject-matter training.

In their New York Times bestseller, Superforecasting, our cofounder Philip Tetlock and his colleague Dan Gardner profile several of these talented forecasters, describing the attributes they share, including open-minded thinking, and argue that forecasting is a skill to be cultivated, rather than an inborn aptitude.

Noise, defined as unwanted variability in judgments, can be corrosive to decision-making. Yet, unlike its better-known companion, bias, it often remains undetected—and therefore unmitigated—in decision processes. In addition to research-based insights into better decision-making and remedies to identify and reduce noise as a source of error, Kahneman and his colleagues take a close look at a select group of forecasters—the Superforecasters—whose judgments are not only less biased but also less noisy than those of most decision-makers. As co-author of Noise Cass Sunstein says, “Superforecasters are less noisy—they don’t show the variability that the rest of us show. They’re very smart; but also, very importantly, they don’t think in terms of ‘yes’ or ‘no’ but in terms of probability.”

Intelligence is often seen as the ability to think and learn, but in a rapidly changing world, there’s another set of cognitive skills that might matter more: the ability to rethink and unlearn. In Think Again, organizational psychologist Adam Grant investigates how we can embrace the joy of being wrong, bring nuance to charged conversations, and build schools, workplaces, and communities of lifelong learners. He also profiles Good Judgment Inc’s Superforecasters Kjirste Morrell and Jean-Pierre Beugoms, who embody the outstanding thought processes suggested in the book. You can read more about Morrell and Beugoms in our interviews here.

In Range, David Epstein examines the world’s most successful athletes, artists, musicians, inventors, and forecasters to show that in most fields—especially those that are complex and unpredictable—generalists, not specialists, are primed to excel. In a chapter about the failure of expert predictions, he discusses Phil Tetlock’s research, the GJP, and how “a small group of foxiest forecasters—just bright people with wide-ranging interests and reading habits—destroyed the competition” in the IARPA tournament. Good Judgment Inc’s Superforecasters Scott Eastman and Ellen Cousins, profiled in the book, weigh in on such topics as curiosity, aggregating perspectives, and learning from specialists without being swayed by their often narrow worldviews.

Other books that mention Superforecasting, Good Judgment Inc, or Good Judgment Project