Books on Making Better Decisions: Good Judgment’s Back-to-School Edition

Since the publication of Tetlock and Gardner’s seminal Superforecasting: The Art and Science of Prediction, many books and articles have been written about the ground-breaking findings of the Good Judgment Project, its corporate successor Good Judgment Inc, and the Superforecasters.

This is not surprising: Decision-makers have a lot to learn from the Superforecasters. Thanks to being actively open-minded and unafraid to rethink their conclusions, the Superforecasters have been able to make accurate predictions where experts often failed. They know how to think in probabilities (or “in bets”), reduce the noise in their judgments, and mitigate cognitive biases such as overconfidence. As Tetlock and Good Judgment Inc have shown, these are skills that can be learned.

Here is a short list of eight notable books that present a wealth of information on ways to evaluate an uncertain future and improve decision-making.

In 2011, IARPA—the research arm of the US intelligence community—launched a massive competition to identify cutting-edge methods to forecast geopolitical events. Four years, 500 questions, and over a million forecasts later, the Good Judgment Project (GJP)—led by Philip Tetlock and Barbara Mellers at the University of Pennsylvania—emerged as the undisputed victor in the tournament. GJP’s forecasts were so accurate that they even outperformed those of intelligence analysts with access to classified data. One of GJP’s biggest discoveries was the Superforecasters: GJP research found compelling evidence that some people are exceptionally skilled at assigning realistic probabilities to possible outcomes—even on topics outside their primary subject-matter training.

In their New York Times bestseller, Superforecasting, our cofounder Philip Tetlock and his colleague Dan Gardner profile several of these talented forecasters, describing the attributes they share, including open-minded thinking, and argue that forecasting is a skill to be cultivated, rather than an inborn aptitude.

Noise, defined as unwanted variability in judgments, can be corrosive to decision-making. Yet, unlike its better-known companion, bias, it often remains undetected—and therefore unmitigated—in decision processes. In addition to research-based insights into better decision-making and remedies to identify and reduce noise as a source of error, Kahneman and his colleagues take a close look at a select group of forecasters—the Superforecasters—whose judgments are not only less biased but also less noisy than those of most decision-makers. As Cass Sunstein, a co-author of Noise, says, “Superforecasters are less noisy—they don’t show the variability that the rest of us show. They’re very smart; but also, very importantly, they don’t think in terms of ‘yes’ or ‘no’ but in terms of probability.”
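To make the bias/noise distinction concrete, here is a minimal numerical sketch using the standard error decomposition (an illustration, not an example taken from the book): when several people judge the same quantity, the average error measures bias, the spread across judges measures noise, and mean squared error splits cleanly into the two. All figures below are made up.

```python
import statistics

# Hypothetical example: five underwriters quote a premium for the same policy.
# The "true" fair premium of 1000 is an assumed value for illustration only.
true_value = 1000
judgments = [1150, 980, 1210, 1050, 900]

mean_judgment = statistics.mean(judgments)
bias = mean_judgment - true_value              # systematic error shared by the group
noise = statistics.pstdev(judgments)           # variability from judge to judge
mse = statistics.mean((j - true_value) ** 2 for j in judgments)

print(f"bias = {bias:.1f}, noise = {noise:.1f}")
print(f"MSE = {mse:.1f}, bias^2 + noise^2 = {bias ** 2 + noise ** 2:.1f}")  # the two match
```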

Intelligence is often seen as the ability to think and learn, but in a rapidly changing world, there’s another set of cognitive skills that might matter more: the ability to rethink and unlearn. In Think Again, organizational psychologist Adam Grant investigates how we can embrace the joy of being wrong, bring nuance to charged conversations, and build schools, workplaces, and communities of lifelong learners. He also profiles Good Judgment Inc’s Superforecasters Kjirste Morrell and Jean-Pierre Beugoms, who embody the outstanding thought processes the book recommends. You can read more about Morrell and Beugoms in our interviews here.

In Range, David Epstein examines the world’s most successful athletes, artists, musicians, inventors, and forecasters to show that generalists, not specialists, are primed to excel in most fields, especially those that are complex and unpredictable. In a chapter about the failure of expert predictions, he discusses Phil Tetlock’s research, the GJP, and how “a small group of foxiest forecasters—just bright people with wide-ranging interests and reading habits—destroyed the competition” in the IARPA tournament. Good Judgment Inc’s Superforecasters Scott Eastman and Ellen Cousins, profiled in the book, weigh in on such topics as curiosity, aggregating perspectives, and learning from specialists without being swayed by their often narrow worldviews.

Other books that mention Superforecasting, Good Judgment Inc, or the Good Judgment Project

How the Republicans Could Still Hold the White House and Senate

What if there’s no Blue Wave?

Over the week before the election, Good Judgment’s professional Superforecasters engaged in an extensive “pre-mortem” or “what-if” exercise regarding our forecasts for a Blue-Wave election. We asked the Superforecasters to imagine that they could time-travel to a future in which the 2020 election results are final, with the Republicans retaining both the White House and control of the Senate. Then, we asked them to “explain” why the outcomes differed from the most likely outcomes in their pre-election probabilistic forecasts. Thinking through these scenarios now, before the actual outcomes are known, helps to avoid hindsight bias (the tendency to view what actually happened as being more inevitable than it was).

If we had asked the same questions six months earlier, the Superforecasters would have responded differently because there would have been many more uncertainties still in play. In April, we didn’t know whether progressives would fully back the Democratic ticket. We didn’t know that former Vice President Biden would choose Senator Kamala Harris as his running mate. We certainly didn’t know how the COVID pandemic and its economic and social fallout would evolve. And, the possibilities of an “October surprise” were wide open.

Our what-if exercise unsurprisingly focused on the factors that remained most uncertain a week before Election Day. For each election question (the Presidency and control of the Senate), the factors Superforecasters most frequently cited for the “wrong side of maybe” outcomes fell into three categories, summarized below.

Here’s how the Superforecasters talked about each of the possible explanations for the Presidency and control of the Senate to remain in Republican hands, despite our high probability estimates that the Democrats would prevail in each case.

    1. We underestimated the likelihood that “close races plus voting mechanics complications [would] lead to judicialization of the election result.” (Explanations generally consistent with this commenter’s view received 33% of the upvotes on the Presidential side and 16% of the upvotes on the Senate side.)

We already know quite a bit about early turnout, thanks to great work by people like Michael McDonald of the University of Florida. But we don’t know how many more people will cast votes. And, critically, we don’t know how many ballots will be disqualified because they arrived past the moving goalpost for eligibility or failed to meet some other requirement. Superforecasters expect the opposing camps to litigate these issues wherever the preliminary vote counts are close enough that disputed ballots could change the outcome. If our less-than-20% scenario for the Republicans to retain the White House materializes, many Superforecasters imagine that “a much-larger-than-expected percentage of mail-in ballots were rejected in the battleground states.”

Similarly, if the Republicans hold the Senate, Superforecasters anticipate that “optimization of Republican control of important election processes in key Senate battleground states: ballot collection processes, court decisions, restrictions on voter eligibility, etc.” will have played an important role. As with the Presidential race, they imagine that “mail-in voting complications [could] lead to undercounting of Democratic votes.” (Like most observers, they expect that Democrats are more likely than Republicans to vote by mail.)

By the way, “judicialization of the election result” does not imply a single Supreme Court decision that determines the outcome of the race. Rather, a host of state and federal court rulings already have affected which votes will count, and Superforecasters expect the courts to become involved in even more such decisions before this election is decided.

    2. We placed too much reliance on “polls that were less accurate than believed, even taking into account the fact that everyone knows they aren’t perfect, leading to people underestimating the chances of the party behind in the polls.” (26% of all upvotes – Presidency; 16% – Senate)

Superforecasters acknowledge that “most of us are influenced by models like 538, The Economist, and others that rely primarily on polls” when forecasting US elections. But we can’t determine exactly how much Good Judgment’s election forecasts rely on a particular poll or polls in general because our Superforecasts combine and weight the probabilities assigned by individual Superforecasters to arrive at a collective estimate of the likelihood of the various outcomes.
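The exact weighting scheme isn’t described here, but the general mechanics of pooling individual probabilities into one collective estimate can be sketched roughly as follows. The weights and the optional extremizing step are illustrative assumptions drawn from the broader forecast-aggregation literature, not Good Judgment’s actual algorithm.

```python
def aggregate(probs, weights=None, extremize_a=1.0):
    """Pool individual probability forecasts into a single collective estimate.

    probs       -- probabilities (0-1) from individual forecasters
    weights     -- optional per-forecaster weights (e.g., reflecting track record);
                   equal weights if None
    extremize_a -- exponent applied in odds space; values > 1 push the pooled
                   estimate away from 0.5, while 1.0 leaves it untouched
    """
    if weights is None:
        weights = [1.0] * len(probs)
    pooled = sum(w * p for w, p in zip(weights, probs)) / sum(weights)  # weighted mean
    odds = (pooled / (1 - pooled)) ** extremize_a   # extremize in odds space
    return odds / (1 + odds)

# Example: five forecasters, weights loosely reflecting past accuracy (made-up numbers)
print(aggregate([0.80, 0.85, 0.70, 0.90, 0.75],
                weights=[1.0, 1.2, 0.8, 1.5, 1.0],
                extremize_a=1.5))   # ~0.90
```

Because the pooled number reflects many independent judgments, no single input—a poll, a model, or one forecaster’s hunch—maps one-to-one onto the final probability.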

Recognizing that the 2016 polls favored a Clinton victory, most Superforecasters have discounted 2020 polls that show extremely high odds of a Biden win. Even so, if President Trump is re-elected and/or the Republicans retain control of the Senate, it is almost definitional that the polls will have been “wrong” in some way.

What polling “errors” do Superforecasters imagine to have occurred in those scenarios? The notion of the “shy Trump voter” was the single most frequently upvoted explanation: “Poll numbers systematically under-counted Republicans, due to either unwillingness to participate in polls or unwillingness to disclose support for Trump.” Other polling-related explanations included “underestimating differences between opinion polls and voter mobilization” and the potential failure of polling-related models such as those of 538 to account for the unique circumstances of holding an election in a pandemic year.

In the case of the Senate, the polling-related explanations cast any errors in less stark terms than those envisioned for the Presidential race. The most upvoted comments in this category included “polls whiffed in the key swing states (a little more than slightly), which brought the race(s) well within the margin of error” and “GOP overperforms the polls, within the margin of error, in the Sunbelt close races (particularly the runoffs in Georgia in January) and Iowa.”

    3. We underestimated the effectiveness of the Republican election strategy. (11% of all upvotes on the Presidential question)

Superforecasters anticipate that President Trump’s re-election, should it occur, would owe more to the effectiveness of the Republican election strategy than they had credited. The most commonly cited strategic “secret weapon” was the Republican “ground game,” with “door-to-door canvassing” proving to be more potent than expected. They also cited the Trump campaign’s “digital advertising” as a factor that they may have underestimated.

    4. We underestimated the extent of split voting (voting for Biden, but for a Republican Senatorial candidate). (29% of all upvotes on the Senate question)

The Superforecasters already project a lower probability for the Democrats to take back the Senate than they do for a Biden victory. But that’s an apples-and-oranges comparison because only 35 of the 100 Senate seats are up for election in 2020, whereas the entire country is in play for the Presidency.

When imagining that the Republicans retain control of the Senate, Superforecasters most commonly anticipate that our forecasts may have underestimated split, or crossover, voting. They see two aspects here: first, voters think more positively about some Republican Senatorial candidates than they do about President Trump; and second, some voters are “disillusioned” with the President but want to see the Republicans hold onto the Senate as a check against a Democratic President and House of Representatives.

The Bottom Line

In this “what-if” exercise, Good Judgment asked the Superforecasters to assume for the sake of argument that the Republicans maintain control of both the White House and the Senate. A key goal of this exercise is to nudge forecasters to rethink the reasoning and evidence supporting their forecasts with an eye to adjusting their probability estimates. Yet, even after several days of internal debate, only a few Superforecasters lowered their estimated odds of a Blue-Wave election. Our aggregate forecasts barely moved.

In other words, a week before the election, Good Judgment’s Superforecasters think our current election forecasts are pretty well calibrated. They don’t see the outcomes as locked in by any means, but they believe their current levels of confidence are appropriate.

No single forecast is ever right or wrong unless it is expressed in terms of absolute certainty (0% or 100%). If the true probability that President Trump will be re-elected is only 13% (our forecast as of November 1st), he would win the election in about 13 of every 100 re-runs if we could replay history repeatedly. That’s why forecasting accuracy is best judged over large numbers of questions.
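One way to see this concretely is the Brier score, the accuracy measure used throughout the GJP research: each forecast on a resolved question gets a score, and skill only becomes visible in the average over many questions. The forecasts and outcomes below are made up for illustration.

```python
def brier(prob, outcome):
    """Two-sided Brier score for a binary question (0 = perfect, 2 = maximally wrong).

    prob    -- forecast probability that the event happens (0-1)
    outcome -- 1 if the event happened, 0 if it did not
    """
    return (prob - outcome) ** 2 + ((1 - prob) - (1 - outcome)) ** 2

# A 13% forecast that "goes wrong" is costly on that one question...
print(round(brier(0.13, 1), 2))   # 1.51

# ...but accuracy is judged by the mean score across many questions (made-up data).
record = [(0.13, 0), (0.85, 1), (0.60, 1), (0.30, 0), (0.92, 1), (0.20, 0)]
print(round(sum(brier(p, o) for p, o in record) / len(record), 3))   # 0.112
```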

We’ve looked at the accuracy of our Superforecasts over hundreds of questions and have yet to find any forecasting method that can beat them consistently. The Superforecasters know what they know – and what they don’t know. When it comes to handicapping the odds for geopolitical and economic events, they’re the best bookies around. Lacking a crystal ball, you’d be wise to give their forecasts serious consideration.

How to Become a Superforecaster®

A listener asked the team behind the BBC’s “CrowdScience” radio show and podcast whether she might, in fact, be a Superforecaster. She’s not the first to wonder if she would qualify – and not the first to be curious about how Good Judgment spots and recruits superior forecasting talent.

For all curious souls out there, here’s the inside scoop.

Superforecaster Origins: the Good Judgment Project

Superforecasters were a surprise discovery of the Good Judgment Project (GJP), the research-and-development project that preceded Good Judgment Inc. GJP was the winner of the massive US-government-sponsored four-year geopolitical forecasting tournament known as ACE.

As our co-founder Phil Tetlock explains in his bestseller Superforecasting: The Art and Science of Prediction, we set up GJP as a controlled experiment with random assignment of participants. Results from the first year of the tournament established that teaming and training could boost forecasting accuracy. Therefore, we selected GJP superforecasters from the top 2% of forecasters in each experimental condition to account for the advantages of having been on a team and/or received training. To minimize the chance that outstanding accuracy resulted from luck rather than skill, we limited eligibility for GJP superforecaster status to those forecasters who participated in at least 50 forecasting questions during a tournament “season.”
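As a rough sketch of what that selection rule looks like in practice (the data layout, field names, and tie-handling here are illustrative, not GJP’s actual pipeline):

```python
from collections import defaultdict

MIN_QUESTIONS = 50   # minimum questions forecast during a tournament season
TOP_SHARE = 0.02     # top 2% within each experimental condition

def select_superforecasters(records):
    """records: dicts with 'forecaster', 'condition', 'questions', 'mean_brier'
    (lower mean Brier score = more accurate)."""
    by_condition = defaultdict(list)
    for r in records:
        if r["questions"] >= MIN_QUESTIONS:          # screen out lucky small samples
            by_condition[r["condition"]].append(r)

    selected = []
    for group in by_condition.values():
        group.sort(key=lambda r: r["mean_brier"])    # most accurate first
        cutoff = max(1, round(TOP_SHARE * len(group)))
        selected.extend(r["forecaster"] for r in group[:cutoff])
    return selected
```

Ranking within each experimental condition, rather than across the whole tournament, is what keeps the advantages of teaming or training from inflating an individual forecaster’s apparent skill.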

Professional Superforecaster Selection

When the ACE tournament ended in mid-2015, Good Judgment Inc invited the forecasters with the best-established track records to become the core of our professional Superforecaster contingent. Most of Good Judgment Inc’s 150+ professional Superforecasters qualified through their relative accuracy during GJP. They had not only earned GJP superforecaster status over one tournament season, but also confirmed their accuracy over 50+ questions in a second year of forecasting.

Since the ACE tournament ended, Good Judgment has continued to identify and recruit new professional Superforecasters via our public forecasting platform, Good Judgment Open. Many of these new recruits never had a chance to participate in ACE, and they have performed every bit as well as the “original” GJP superforecasters.

Do You Have What It Takes?

Each autumn, Good Judgment identifies and recruits potential Superforecasters from the ranks of GJ Open forecasters. If you think you have what it takes to be a Superforecaster, put that belief to an objective test: forecast on at least 100 GJ Open questions (cumulatively, not necessarily in one year).

Those who consistently outperform the crowd are automatically eligible for our annual pro Super selection process and potentially a trial engagement. Those who successfully complete a three-month probation period will become full-fledged Superforecasters.

So, there you have it. Becoming a professional Superforecaster is the ultimate meritocracy. We don’t care where you live (as long as you have reliable Internet access!). When evaluating your qualifications for Superforecaster status, we ignore your gender, age, race, religion, and even education. (We are, however, keen to have an increasingly diverse pool of professional Superforecasters and encourage people of all backgrounds to test their skills on GJ Open!) We simply want to find the world’s most accurate forecasters – and to nurture their talents in a collaborative environment with other highly skilled professionals.

… If You’re Still Curious

While you’re pondering whether you have what it takes to be a Superforecaster, check out the BBC CrowdScience podcast episode that answers their listener’s question.

And, as always, you can visit our Superforecaster Analytics page to learn more about how Good Judgment’s professional Superforecasters provide early insights and well-calibrated probability estimates about key risks and opportunities to help governments, corporate clients, and NGOs make better decisions.