Super Quiet: Kahneman’s Noise and the Superforecasters

Much is written about the detrimental role of bias in human judgment. Its companion, noise, on the other hand, often goes undetected or underestimated. Noise: A Flaw in Human Judgment, the new book by Nobel laureate Daniel Kahneman and his co-authors, Olivier Sibony and Cass R. Sunstein, exposes how noise—variability in judgments that should be identical—wreaks havoc in many fields, from law to medicine to economic forecasting.

Noise offers research-based insights into better decision-making and suggests remedies to reduce the titular source of error.

No book on making better judgments, of course, particularly better judgments in forecasting, would be complete without a mention of the Superforecasters, and certainly not one co-authored by such a luminary of human judgment as Kahneman.

Superforecasters (discussed in detail in chapters 18 and 21 of the book) are a select group who “reliably out-predict their less-than-super peers” because they are able to consistently overcome both bias and noise. One could say the Superforecasters are not only actively open-minded—they are also super quiet in their forecasts.

“What makes the Superforecasters so good?” the authors ask. For one, they are “unusually intelligent” and “unusually good with numbers.” But that’s not the whole story.

“Their real advantage,” according to Kahneman, Sibony, and Sunstein, “is not their talent at math; it is their ease in thinking analytically and probabilistically.”

Noise identifies other qualities that set the Superforecasters apart from regular forecasters:

    • Willingness and ability to structure and disaggregate problems;
    • Taking the outside view;
    • Systematically looking for base rates.

In short, it’s not just their natural intelligence. It’s how they use it.

Not everyone is a good forecaster, of course, and while crowds are usually better than individuals, not every crowd is equally wise.

“It is obvious that in any task that requires judgment, some people will perform better than others will. Even a wisdom-of-crowds aggregate of judgments is likely to be better if the crowd is composed of more able people,” the authors state.

Among a myriad of individual forecasters and forecasting firms, Good Judgment’s Superforecasters are unique, with an unbeaten track record. Kahneman, Sibony, and Sunstein are not surprised:

“Judgments are both less noisy and less biased when those who make them are well trained, are more intelligent, and have the right cognitive style.”

Good Judgment’s Training Reduces Noise

“Well trained” is a key phrase here. When the Superforecasters were discovered in “some of the most innovative work on the quality of forecasting”—the Good Judgment Project (GJP, 2011-2016)—they were the top 2% among thousands of volunteers. That doesn’t mean, however, that the rest of the world is doomed to drown in noisy decision-making. Superforecasting is not an either-you-have-it-or-you-don’t skill.

According to Kahneman, Sibony, and Sunstein, “people can be trained to be superforecasters or at least to perform more like them.”

Good Judgment Inc’s online training and workshops do just that. Based on the concepts taught in the GJP training, these workshops are designed to reduce psychological biases—which, in turn, results in less noise.

Kahneman and colleagues explain how this works, citing the BIN (bias, information, and noise) model for forecasting developed by Ville Satopää, Marat Salikhov, and Good Judgment’s co-founders Phil Tetlock and Barb Mellers:

“When they affect different individuals on different judgments in different ways, psychological biases produce noise. … As a result, training forecasters to fight their psychological biases works—by reducing noise.”
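
To make the bias-versus-noise distinction concrete, here is a minimal sketch with entirely hypothetical numbers. It is not the BIN model itself; it only illustrates the intuition that bias shows up as a shared, systematic deviation from outcomes, while noise shows up as scatter among forecasters judging the same questions.

```python
import numpy as np

# Hypothetical data: rows = forecasters, columns = questions.
# Each cell is a probability forecast; outcomes are 0/1 resolutions.
forecasts = np.array([
    [0.70, 0.20, 0.90, 0.40],
    [0.80, 0.35, 0.85, 0.55],
    [0.60, 0.25, 0.95, 0.45],
])
outcomes = np.array([1, 0, 1, 0])

consensus = forecasts.mean(axis=0)  # average forecast per question

# "Bias" here: the systematic gap between the consensus and the outcomes.
bias = (consensus - outcomes).mean()

# "Noise" here: how much individual forecasters scatter around the consensus
# on the very same questions (variability in judgments that should agree).
noise = forecasts.std(axis=0).mean()

print(f"average bias:  {bias:+.3f}")
print(f"average noise: {noise:.3f}")
```

Training that dampens individual psychological biases shrinks that scatter, which is why, as the authors note, it works by reducing noise.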

Good Judgment’s training also focuses on teaming, another method research has shown to be effective at reducing noise.

According to Kahneman, Sibony, and Sunstein, both private and public organizations—and society at large—stand to gain much from reducing noise. “Should they do so, organizations could reduce widespread unfairness—and reduce costs in many areas,” the authors write. And the Superforecasters are an example for decision-makers to emulate in these efforts.

Question Clustering: Ensuring relevance and rigor in business and geopolitical forecasting

Forecasting is an essential part of business. Companies use historical data and economic trends to make informed estimates of future sales, profits, and losses. Amazon and Google rely on predictive algorithms to highlight specific products or search results for their customers. Shopkeepers arrange their display windows based on their predictions of demand. Some of the forecasting is narrow and company-specific. The more challenging part has to do with broader questions, such as the overall market outlook.

Consider this question: What will market conditions be like after the pandemic?

This question is of great relevance to any business decision-maker, but it’s so broad that it could be approached from many different angles, using multiple definitions. “Market conditions,” for instance, encompass multiple factors, from industry-specific trends to interest rates, from consumer spending to supply chains. How do we intend to measure them? What time frame are we looking at?

Combining related forecasting questions into clusters can reveal bigger-picture insights for businesses and industries

Much more tractable is a narrower question—”On 31 January 2022, what will be the average US tariff rate on imports from China?” But in checking all the boxes to craft a rigorous question, narrow topics may lose their relevance to decision-makers trying to determine the prospects for their business down the line. This is the rigor-relevance trade-off.

To tackle the relevant strategic questions that decision-makers ask with the rigor that accurate forecasting requires, Good Judgment crafts sets of questions about discrete events that, in combination, shed light on a broader topic. Good Judgment calls these sets question clusters. A good cluster examines the strategic question from multiple perspectives—political, economic, financial, security, and informational. These days, a public health perspective is also usually worthwhile. Aggregating the probabilities that the Superforecasters assign to such specific questions generates a comprehensive forecast about the strategic question, as well as early warnings about the deeper geopolitical or business trends underway.
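
As a purely illustrative sketch, that roll-up can be pictured as a weighted combination of cluster probabilities. The question texts, probabilities, and weights below are invented, and Good Judgment’s actual aggregation methods are more sophisticated; the point is only how narrow, rigorous questions can be combined into a signal about the broad strategic one.

```python
# Hypothetical cluster for the broad question "What will market conditions
# be like after the pandemic?" Question texts, probabilities, and weights
# are invented for illustration only.
cluster = [
    ("Consumer spending exceeds pre-pandemic level by Q4", 0.65, 0.35),
    ("Central bank raises rates before year-end",          0.40, 0.25),
    ("Major supply-chain disruption persists next year",   0.30, 0.25),
    ("Industry-specific demand returns to trend",          0.55, 0.15),
]

# Weighted average of the cluster probabilities, normalized by total weight.
total_weight = sum(weight for _, _, weight in cluster)
composite = sum(prob * weight for _, prob, weight in cluster) / total_weight

print(f"Composite outlook signal: {composite:.2f}")
```

In practice, each sub-question is written to resolve against a clear, dated criterion, like the tariff-rate question above.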

The Good Judgment team developed this method in a research study sponsored and validated by the US Intelligence Advanced Research Projects Activity (IARPA) from 2011 to 2015, in which the Superforecasters outperformed the collective forecasts of intelligence analysts in the US government by 30%. Since then, the Superforecasters have refined and expanded their approach to evaluate emerging consumer trends, inform product development, and understand the factors driving commodity markets. A similar framework was detailed in a recent Foreign Affairs cover article by Professor Philip Tetlock and Dr. Peter Scoblic.

As an example, one question cluster on a critical topic that clients have asked Good Judgment to develop includes political, military, social, and economic questions on emerging trends regarding Taiwan:

    • Will Taiwan accuse the People’s Republic of China (PRC) of flying a military aircraft over the territory of the main island of Taiwan without permission before 31 December 2021?
    • Will the PRC or Taiwan accuse the other’s military or civilian forces of firing upon its own military or civilian forces before 1 January 2022?
    • Before 1 January 2022, will US legislation explicitly authorizing the president to use the armed forces to defend Taiwan from a military attack from the PRC become law?
    • Will Taiwan (Chinese Taipei) send any athletes to compete in the 2022 Winter Olympics in Beijing?
    • Will the Council of the EU adopt a decision authorizing the Commission to open negotiations with Taiwan on an investment agreement?
    • Will the World Health Organization reinstate Taiwan’s observer status before 1 January 2022?

As another example, in forecasting the technological landscape, the Superforecasters estimate emerging trends in electric car sales, hydrogen-fueled vehicles, the growth of Starlink services, and social media regulation, among other areas.

Independently, these forecasts are crucial for some companies and investors and informative for the discerning public. Taken together, they point to trends that are shaping the future of the geopolitical and business world.

The same technique can be applied to a company such as Amazon. The traditional approach of security analysts is to build a fixed model that predicts valuation based on dividends, cash flow, and other objective metrics. A question cluster approach can uncover and quantify critical subjective variables, such as the risk of a regulatory crackdown or shifts in labor relations, making the analyst’s model more robust.
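
One way to picture this, as a hypothetical sketch rather than any analyst’s actual model, is to let the cluster forecasts set the probabilities on valuation scenarios. All names and figures below are invented for illustration.

```python
# Hypothetical valuation scenarios for a company (all values invented).
# A fixed model might output only the "base case" figure; the cluster
# forecasts supply probabilities for the subjective variables around it.
scenarios = {
    "base case":             1_800,  # illustrative valuation, $ per share
    "regulatory crackdown":  1_400,
    "labor-relations shift": 1_600,
}

# Probabilities taken from (hypothetical) question-cluster forecasts.
probabilities = {
    "regulatory crackdown":  0.20,
    "labor-relations shift": 0.15,
}
probabilities["base case"] = 1.0 - sum(probabilities.values())

expected_value = sum(scenarios[s] * probabilities[s] for s in scenarios)
print(f"Probability-weighted valuation: ${expected_value:,.0f} per share")
```

The fixed model’s base case stays in place; the cluster forecasts simply adjust how much weight the alternative scenarios receive as the Superforecasters update their probabilities.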

“In business, good forecasting can be the difference between prosperity and bankruptcy,” says Good Judgment co-founder Professor Philip Tetlock. Successful businesses rely on forecasting to make better decisions. Using clusters of interrelated questions is one way to ensure those forecasts are both rigorous and relevant.

* This article originally appeared in Luckbox Magazine and is shared with their permission.

The Cost of Overconfidence

With SPACs all the rage, it’s important not to get too carried away by the rhetoric. Overconfidence can be expensive, whether in geopolitics, public health, or the stock market. From the 1961 Bay of Pigs debacle to the slow response to the COVID-19 crisis, to millions of dollars lost speculating in the markets, history is filled with costly examples. And yet, bold statements continue to be overvalued in our culture. Time and time again, the media turn to pundits who speak with conviction despite their spotty track records when it comes to offering real foresight.

Think back to a year ago, when the airwaves were filled with experts and politicians confidently asserting that COVID-19 would swiftly pass. The US president claimed in February 2020 that the coronavirus was under control in the US and would disappear “like a miracle.” It took another month for the administration to acknowledge that the unfolding pandemic was serious.

“A confident yes or no is satisfying in a way that maybe never is, a fact that helps to explain why the media so often turn to hedgehogs who are sure they know what is coming no matter how bad their forecasting records may be,” writes Good Judgment’s co-founder Philip E. Tetlock in his book with Dan Gardner, Superforecasting: The Art and Science of Prediction.

Dr. Tetlock refers to a distinction between “foxes” and “hedgehogs,” a metaphor borrowed from ancient Greek poetry and popularized by the philosopher Isaiah Berlin: “The fox knows many things but the hedgehog knows one big thing.”

Hedgehogs tend to be more confident—and more likely to get media attention—but, as research has found in multiple experiments, they also tend to be worse forecasters. Foxes, in contrast, tend to think in terms of “however” and “on the other hand,” switch mental gears, and talk about probabilities rather than certainties.

For instance, last year Good Judgment’s Superforecasters assigned a 67% probability to worldwide COVID-19 cases exceeding 53 million within a year and a 99% probability to deaths in the US alone surpassing 200,000—a figure many found implausibly high at the time. The Superforecasters proved right in both cases. Their judgment was, and continues to be, well-calibrated. In other words, they know what they know and know what they don’t know, and they make their forecasts accordingly.
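
Calibration has a simple operational meaning that can be checked after the fact: across all the occasions a forecaster said “70%,” the event should have happened about 70% of the time. The sketch below, using made-up forecast-and-outcome pairs, shows one way to run that check.

```python
from collections import defaultdict

# Hypothetical (forecast probability, outcome) pairs; 1 = event happened.
records = [(0.9, 1), (0.9, 1), (0.9, 0), (0.6, 1), (0.6, 0),
           (0.3, 0), (0.3, 0), (0.3, 1), (0.1, 0), (0.1, 0)]

# Group resolved outcomes by the probability the forecaster stated.
buckets = defaultdict(list)
for prob, outcome in records:
    buckets[prob].append(outcome)

# A well-calibrated forecaster's observed frequency tracks the stated probability.
for prob in sorted(buckets):
    hits = buckets[prob]
    print(f"said {prob:.0%}: happened {sum(hits) / len(hits):.0%} "
          f"of the time ({len(hits)} forecasts)")
```

Large gaps between stated probabilities and observed frequencies, in either direction, signal over- or underconfidence.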

To avoid overconfidence, Superforecasters consider worst-case scenarios. Instead of relying on hunches and past success, they actively seek out evidence that those hunches may be wrong. They embrace new information and are not afraid to change their mind in light of new evidence.

Alas, pundits and most of the media have yet to join the foxes. “We live in a world that rewards those who speak with conviction—even when that is misplaced—and gives very little airtime to those who acknowledge doubt,” writes Financial Times columnist Jemima Kelly.

The sense of security that comes with confident judgments is comforting. But it is an illusion.

The cost of that illusion can be steep: from the inadequate early response to the pandemic to investors who trade to their detriment because they are overconfident about their ability to predict stock market returns.

Superforecasters know a way to avoid that cost: In a world that overvalues hedgehogs, pay more attention to your inner fox.

* This article originally appeared in Luckbox Magazine and is shared with their permission.