Good Judgment Inc: A Year in Review

From our headquarters in Manhattan to Canada, Brazil, and points in between, the Good Judgment team had a productive and exciting year in 2021. Here are some of the key developments and projects we worked on in the past year.

FutureFirst Launched

One of the biggest additions to Good Judgment’s spectrum of services in 2021 was the launch of FutureFirst, a client-driven subscription platform that gives subscribers unlimited access to all of Good Judgment’s Superforecasts.

In many ways, FutureFirst is a consolidation of our scientific experiments and several years of successful client engagements. We designed FutureFirst to

    • offer clients one-click access to the collective wisdom of our international team of Superforecasters—to their predictions, rationales, and sources;
    • enable easy monitoring of the Superforecasters’ predictions on a wide range of topics (economy and finance, geopolitics, environment, technology, health, and more); and
    • allow clients to nominate and upvote new questions that matter to their organization so that the topics are crowd-sourced from the community of clients directly.

With the addition of the Class of 2022 Superforecasters, Good Judgment now works with more than 180 professional Superforecasters. They reside on every continent except Antarctica and were identified through a rigorous process as being among the world’s most accurate forecasters.

There are currently some 80 active forecasts on FutureFirst, with new questions being added nearly every week. Taken together, the forecasts on the platform paint the big picture of global risk with accuracy not found anywhere else.

Improving Ways of Eliciting and Aggregating Forecasts

At the same time, we continue to crowdsource other ideas to enhance the value of our service for clients. In response to user feedback and innovations by our data science team, we:

    • now offer “continuous forecasts,” so that clients get a target forecast number as well as probabilities distributed across ranges;
    • provide “rolling forecasts” on a custom basis, with predictions that automatically advance each day so that the time horizon stays fixed (for instance, the probability of a recession in the next 12 months); and
    • will shortly launch API access so that clients can feed forecast data directly into their models. (Both forecast formats are sketched below.)
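
To make the two formats concrete, here is a minimal sketch of how such forecasts might be represented as data. Everything in it is our own illustration: the field names, the example questions, and the values are invented for this post, not the actual FutureFirst API schema, which had not been published at the time of writing.

    # Hypothetical sketch only: not the actual FutureFirst API schema.
    from datetime import date, timedelta

    # A "continuous forecast": probability mass distributed across outcome
    # ranges, from which a single target number can also be derived.
    continuous_forecast = {
        "question": "Year-over-year US CPI inflation for December 2022",
        "ranges": {  # probabilities across ranges; they sum to 1.0
            "below 2%": 0.05,
            "2% to 4%": 0.30,
            "4% to 6%": 0.45,
            "above 6%": 0.20,
        },
    }

    # A "rolling forecast": the prediction advances each day, so the time
    # horizon stays fixed (e.g., a recession within the next 12 months).
    rolling_forecast = {
        "question": "US recession beginning within the next 12 months",
        "as_of": date.today(),
        "window_end": date.today() + timedelta(days=365),
        "probability": 0.15,  # illustrative value, not a real forecast
    }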


Superforecasters in the Media

Superforecasters predict Jerome Powell’s reappointment

From questions about the Tokyo Olympics to the renomination of Jerome Powell to our early Covid-19 forecasts that were closed and scored during the year, 2021 offered many examples of Good Judgment’s Superforecasters providing early and accurate insights. The European Central Bank and financial firms such as Goldman Sachs and T. Rowe Price all referenced our forecasts in their work. The year also brought both new and returning collaborations with some of the world’s leading media organizations and authors.

    • We worked with The Economist on their “What If” and “The World Ahead 2022” annual publications.
    • The Financial Times featured our forecast on Covid-19 vaccinations on their front page and on their Covid-19 data page.
    • Sky News launched an exciting current affairs challenge for the UK and beyond on our public platform GJ Open.
    • Best-selling authors Tim Harford and Adam Grant also ran popular forecasting challenges.
    • Adam Grant’s Think Again and Daniel Kahneman’s Noise (with coauthors Olivier Sibony and Cass R. Sunstein), both published in 2021, discuss the Superforecasters’ outstanding track record.
    • Magazines such as Luckbox and Entrepreneur published major articles about Good Judgment and the Superforecasters.


Training Better Forecasters

Our workshops continued to attract astute business and government participants seeking to boost in-house forecasting accuracy. Of the organizations that took a workshop with us in 2020, more than 90% came back for more in 2021, and they were joined by many more organizations in the public and private sectors throughout the year. Many of these firms now regularly send their interns and new hires through our workshops. Capstone LLC, a global policy analysis firm with headquarters in Washington, DC, London, and Sydney, went a step further: They made our workshops the cornerstone of multi-day mandatory training sessions for all their analysts.

“This led to the adoption of [S]uperforecasting techniques across all of our research and a more rigorous measuring of all our predictions,” Capstone CEO David Barrosse wrote on the company’s blog. “Ultimately the process means better predictions, and more value for clients.”

As many in our company are themselves Superforecasters, we start any forecast about Good Judgment in 2022 by first looking back: the science of Superforecasting has shown that establishing a base rate leads to more accurate predictions. If the developments of 2021 are any indication, next year will bring more exciting projects, fruitful collaborations, and effective ways to bring valuable early insight to our clients.
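
As a toy illustration of base-rate thinking (all numbers below are invented, not a Good Judgment forecast): start from how often the event class has occurred historically, then adjust for case-specific evidence.

    # Toy example of base-rate anchoring; all numbers are invented.
    base_rate = 10 / 60   # e.g., roughly 10 recession years in a 60-year window
    adjustment = 1.5      # case-specific evidence suggests "more likely than usual"

    forecast = min(base_rate * adjustment, 1.0)
    print(f"Anchored forecast: {forecast:.0%}")  # -> 25%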

Books on Making Better Decisions: Good Judgment’s Back-to-School Edition

Since the publication of Tetlock and Gardner’s seminal Superforecasting: The Art and Science of Prediction, many books and articles have been written about the ground-breaking findings of the Good Judgment Project, its corporate successor Good Judgment Inc, and the Superforecasters.

This is not surprising: Decision-makers have a lot to learn from the Superforecasters. Thanks to being actively open-minded and unafraid to rethink their conclusions, the Superforecasters have been able to make accurate predictions where experts often failed. They know how to think in probabilities (or “in bets”), reduce the noise in their judgments, and mitigate cognitive biases such as overconfidence. As Tetlock and Good Judgment Inc have shown, these are skills that can be learned.

Here is a short list of eight notable books that present a wealth of information on ways to evaluate an uncertain future and improve decision-making.

In 2011, IARPA—the research arm of the US intelligence community—launched a massive competition to identify cutting-edge methods to forecast geopolitical events. Four years, 500 questions, and over a million forecasts later, the Good Judgment Project (GJP)—led by Philip Tetlock and Barbara Mellers at the University of Pennsylvania—emerged as the undisputed victor in the tournament. GJP’s forecasts were so accurate that they even outperformed those of intelligence analysts with access to classified data. One of the biggest discoveries of GJP was the Superforecasters: GJP research found compelling evidence that some people are exceptionally skilled at assigning realistic probabilities to possible outcomes—even on topics outside their primary subject-matter training.

In their New York Times bestseller, Superforecasting, our cofounder Philip Tetlock and his colleague Dan Gardner profile several of these talented forecasters, describing the attributes they share, including open-minded thinking, and argue that forecasting is a skill to be cultivated, rather than an inborn aptitude.

Noise, defined as unwanted variability in judgments, can be corrosive to decision-making. Yet, unlike its better-known companion, bias, it often remains undetected—and therefore unmitigated—in decision processes. In addition to research-based insights into better decision-making and remedies to identify and reduce noise as a source of error, Kahneman and his colleagues take a close look at a select group of forecasters—the Superforecasters—whose judgments are not only less biased but also less noisy than those of most decision-makers. As Noise co-author Cass Sunstein says, “Superforecasters are less noisy—they don’t show the variability that the rest of us show. They’re very smart; but also, very importantly, they don’t think in terms of ‘yes’ or ‘no’ but in terms of probability.”
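
The book’s central identity makes the distinction concrete: across judgments of the same case, mean squared error decomposes exactly into a bias term and a noise term. A minimal sketch, with invented judgment data:

    # Minimal sketch of the bias/noise decomposition discussed in Noise.
    # The judgment values are invented for illustration.
    import statistics

    true_value = 100.0
    judgments = [112.0, 95.0, 108.0, 120.0, 90.0]  # five judges, same case

    errors = [j - true_value for j in judgments]
    bias = statistics.mean(errors)    # shared, systematic error
    noise = statistics.pstdev(errors) # unwanted variability across judges
    mse = statistics.mean(e**2 for e in errors)

    # MSE decomposes exactly into bias^2 + noise^2.
    assert abs(mse - (bias**2 + noise**2)) < 1e-9
    print(f"bias={bias:.1f}, noise={noise:.1f}, mse={mse:.1f}")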

Intelligence is often seen as the ability to think and learn, but in a rapidly changing world, there’s another set of cognitive skills that might matter more: the ability to rethink and unlearn. In Think Again, organizational psychologist Adam Grant investigates how we can embrace the joy of being wrong, bring nuance to charged conversations, and build schools, workplaces, and communities of lifelong learners. He also profiles Good Judgment Inc’s Superforecasters Kjirste Morrell and Jean-Pierre Beugoms, who embody the thought processes the book recommends. You can read more about Morrell and Beugoms in our interviews here.

In Range, David Epstein examines the world’s most successful athletes, artists, musicians, inventors, and forecasters to show that in most fields—especially those that are complex and unpredictable—generalists, not specialists, are primed to excel. In a chapter about the failure of expert predictions, he discusses Phil Tetlock’s research, the GJP, and how “a small group of foxiest forecasters—just bright people with wide-ranging interests and reading habits—destroyed the competition” in the IARPA tournament. Good Judgment Inc’s Superforecasters Scott Eastman and Ellen Cousins, profiled in the book, weigh in on such topics as curiosity, aggregating perspectives, and learning from specialists without being swayed by their often narrow worldviews.


How the Republicans Could Still Hold the White House and Senate

What if there’s no Blue Wave?

Over the week before the election, Good Judgment’s professional Superforecasters engaged in an extensive “pre-mortem” or “what-if” exercise regarding our forecasts for a Blue-Wave election. We asked the Superforecasters to imagine that they could time-travel to a future in which the 2020 election results are final, with the Republicans retaining both the White House and control of the Senate. Then, we asked them to “explain” why the outcomes differed from the most likely outcomes in their pre-election probabilistic forecasts. Thinking through these scenarios now, before the actual outcomes are known, helps to avoid hindsight bias (the tendency to view what actually happened as being more inevitable than it was).

If we had asked the same questions six months earlier, the Superforecasters would have responded differently because there would have been many more uncertainties still in play. In April, we didn’t know whether progressives would fully back the Democratic ticket. We didn’t know that former Vice President Biden would choose Senator Kamala Harris as his running mate. We certainly didn’t know how the COVID pandemic and its economic and social fallout would evolve. And the possibilities of an “October surprise” were wide open.

Our what-if exercise unsurprisingly focused on the factors that remained most uncertain a week before Election Day. For each election question (Presidency and control of the Senate), the factors Superforecasters most frequently cited for the “wrong side of maybe” outcomes fell into three categories, summarized below.

Here’s how the Superforecasters talked about each of the possible explanations for the Presidency and control of the Senate to remain in Republican hands, despite our high probability estimates that the Democrats would prevail in each case.

    1. We underestimated the likelihood that “close races plus voting mechanics complications [would] lead to judicialization of the election result.” (Explanations generally consistent with this commenter’s view received 33% of the upvotes on the Presidential side and 16% of the upvotes on the Senate side.)

We already know quite a bit about early turnout, thanks to great work by people like Michael McDonald of the University of Florida. But we don’t know how many more people will cast votes. And, critically, we don’t know how many ballots will be disqualified because they arrived past the moving goalpost for eligibility or failed to meet some other requirement. Superforecasters expect the opposing camps to litigate these issues wherever the preliminary vote counts are close enough that disputed ballots could change the outcome. If our less-than-20% scenario for the Republicans to retain the White House materializes, many Superforecasters imagine that “a much-larger-than-expected percentage of mail-in ballots were rejected in the battleground states.”

Similarly, if the Republicans hold the Senate, Superforecasters anticipate that “optimization of Republican control of important election processes in key Senate battleground states: ballot collection processes, court decisions, restrictions on voter eligibility, etc.” will have played an important role. As with the Presidential race, they imagine that “mail-in voting complications [could] lead to undercounting of Democratic votes.” (Like most observers, they expect that Democrats are more likely than Republicans to vote by mail.)

By the way, “judicialization of the election result” does not imply a single Supreme Court decision that determines the outcome of the race. Rather, a host of state and federal court rulings already have affected which votes will count, and Superforecasters expect the courts to become involved in even more such decisions before this election is decided.

    2. We placed too much reliance on “polls that were less accurate than believed, even taking into account the fact that everyone knows they aren’t perfect, leading to people underestimating the chances of the party behind in the polls.” (26% of all upvotes – Presidency; 16% – Senate)

Superforecasters acknowledge that “most of us are influenced by models like 538, The Economist, and others that rely primarily on polls” when forecasting US elections. But we can’t determine exactly how much Good Judgment’s election forecasts rely on a particular poll, or on polls in general: our Superforecasts combine and weight the probabilities assigned by individual Superforecasters to arrive at a collective estimate of the likelihood of the various outcomes.
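
As a rough illustration of what “combine and weight” can mean (Good Judgment’s actual aggregation algorithm is not described in this post, so the scheme and numbers below are our own assumptions): a weighted pool gives more influence to forecasters with stronger track records.

    # Illustrative only: Good Judgment's actual aggregation method is not
    # described here. This shows one simple weighted pooling scheme.

    def aggregate(probabilities, weights):
        """Weighted average of individual probability estimates."""
        total = sum(weights)
        return sum(p * w for p, w in zip(probabilities, weights)) / total

    # Three hypothetical forecasters; weights might reflect past accuracy.
    probs = [0.10, 0.15, 0.20]
    weights = [2.0, 1.5, 1.0]

    print(f"Aggregate: {aggregate(probs, weights):.0%}")  # -> 14%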

Recognizing that the 2016 polls favored a Clinton victory, most Superforecasters have discounted 2020 polls that show extremely high odds of a Biden win. Even so, if President Trump is re-elected and/or the Republicans retain control of the Senate, it is almost definitional that the polls will have been “wrong” in some way.

What polling “errors” do Superforecasters imagine to have occurred in those scenarios? The notion of the “shy Trump voter” was the single most frequently upvoted explanation: “Poll numbers systematically under-counted Republicans, due to either unwillingness to participate in polls or unwillingness to disclose support for Trump.” Other polling-related explanations included “underestimating differences between opinion polls and voter mobilization” and the potential failure of polling-related models such as those of 538 to account for the unique circumstances of holding an election in a pandemic year.

In the case of the Senate, the polling-related explanations tend to frame any errors in less stark terms than those envisioned for the Presidential race. The most upvoted comments in this category included “polls whiffed in the key swing states (a little more than slightly), which brought the race(s) well within the margin of error” and “GOP overperforms the polls, within the margin of error, in the Sunbelt close races (particularly the runoffs in Georgia in January) and Iowa.”

    3. We underestimated the effectiveness of the Republican election strategy. (11% of all upvotes re the Presidential race)

Superforecasters anticipate that President Trump’s re-election, should it occur, would owe more to the effectiveness of the Republican election strategy than they had credited. The most commonly cited strategic “secret weapon” was the Republican “ground game,” with “door-to-door canvassing” proving to be more potent than expected. They also cited the Trump campaign’s “digital advertising” as a factor that they may have underestimated.

    4. We underestimated the extent of split voting (voting for Biden, but for a Republican Senatorial candidate). (29% of all upvotes on the Senate question)

The Superforecasters already project a lower probability for the Democrats to take back the Senate than they do for a Biden victory. But that’s an apples-and-oranges comparison because only 35 of the 100 Senate seats are up for election in 2020, whereas the entire country is in play for the Presidency.

When imagining that the Republicans retain control of the Senate, Superforecasters most commonly anticipate that our forecasts may have underestimated split, or crossover, voting. They see two aspects here: first, voters think more positively about some Republican Senatorial candidates than they do about President Trump; and second, some voters are “disillusioned” with the President but want to see the Republicans hold onto the Senate as a check against a Democratic President and House of Representatives.

The Bottom Line

In this “what-if” exercise, Good Judgment asked the Superforecasters to assume for the sake of argument that the Republicans maintain control of both the White House and the Senate. A key goal of this exercise is to nudge forecasters to rethink the reasoning and evidence supporting their forecasts with an eye to adjusting their probability estimates. Yet, even after several days of internal debate, only a few Superforecasters lowered their estimated odds of a Blue-Wave election. Our aggregate forecasts barely moved.

In other words, a week before the election, Good Judgment’s Superforecasters think our current election forecasts are pretty well calibrated. They don’t see the outcomes as locked in by any means, but they are confident that their current levels of confidence are appropriate.

No single forecast is ever right or wrong unless it is expressed in terms of absolute certainty (0% or 100%). If the true probability that President Trump will be re-elected is only 13% (our forecast as of November 1st), he would win the election 13 out of 100 times if we could re-run history repeatedly. That’s why forecasting accuracy is best judged over large numbers of questions.
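
The standard yardstick for that kind of judgment is the Brier score used in the IARPA tournament: the mean squared difference between forecast probabilities and actual outcomes, where lower is better. A minimal sketch, with invented forecasts and outcomes:

    # Brier score over many binary questions (lower is better).
    # Forecasts and outcomes below are invented for illustration.

    def brier_score(forecasts, outcomes):
        """Mean squared error between probabilities and 0/1 outcomes."""
        return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

    forecasts = [0.13, 0.80, 0.60, 0.25]  # e.g., 13% chance of re-election
    outcomes  = [0,    1,    1,    0   ]  # what actually happened

    print(f"Brier score: {brier_score(forecasts, outcomes):.3f}")  # -> 0.070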

We’ve looked at the accuracy of our Superforecasts over hundreds of questions and have yet to find any forecasting method that can beat them consistently. The Superforecasters know what they know – and what they don’t know. When it comes to handicapping the odds for geopolitical and economic events, they’re the best bookies around. Lacking a crystal ball, you’d be wise to give their forecasts serious consideration.