How Distinct Is a “Distinct Possibility”?
Vague Verbiage in Forecasting
“What does a ‘fair chance’ mean?”
It is a question posed to a diverse group of professionals—financial advisers, political analysts, investors, journalists—during one of Good Judgment Inc’s virtual workshops. The participants have joined the session from North America, the EU, and the Middle East. They are about to get intensive hands-on training to become better forecasters. Good Judgment’s Senior Vice President Marc Koehler, a Superforecaster and former diplomat, leads the workshop. He takes the participants back to 1961. The young President John F. Kennedy asks his Joint Chiefs of Staff whether a CIA plan to topple the Castro government in Cuba would be successful. They tell the president the plan has a “fair chance” of success.
The workshop participants are now asked to enter a value between 0 and 100—what do they think is the probability of success of a “fair chance”?
When they compare their numbers, the results are striking: the answers range from 15% to 75%, with a median value of 60%.
The story of the 1961 Bay of Pigs invasion is recounted in Good Judgment co-founder Philip Tetlock’s Superforecasting: The Art and Science of Prediction (co-authored with Dan Gardner). The adviser who wrote the words “fair chance,” the story goes, later said he had in mind only a 25% chance of success. But like many of the participants in the Good Judgment workshop some 60 years later, President Kennedy took the phrase to imply a far more optimistic assessment. By using vague verbiage instead of precise probabilities, the analysts failed to communicate their true evaluation to the president. The rest is history: the Bay of Pigs plan he approved ended in failure and loss of life.
Vague verbiage is pernicious in multiple ways.
1. Language is open to interpretation. Numbers are not.
According to research published in the Journal of Experimental Psychology, “maybe” can mean anything from a 22% to an 89% chance, radically different things to different people under different circumstances. Survey research by Good Judgment shows similarly wide implied ranges for other vague terms, with “distinct possibility” spanning 21% to 84%. Yet “distinct possibility” was precisely the phrase White House National Security Adviser Jake Sullivan used on the eve of the Russian invasion of Ukraine.
Other researchers have found equally dramatic variation in the probabilities people attach to vague terms. In a survey of 1,700 respondents, Andrew Mauboussin and Michael J. Mauboussin found, for instance, that the probability range most people attribute to an event with a “real possibility” of happening spans roughly 20% to 80%.
2. Language avoids accountability. Numbers embrace it.
Pundits and media personalities often use such words as “may” and “could” without even attempting to define them because these words give them infinite flexibility to claim credit when something happens (“I told you it could happen”) and to dodge blame when it does not (“I merely said it could happen”).
“I can confidently forecast that the Earth may be attacked by aliens tomorrow,” Tetlock writes. “And if it isn’t? I’m not wrong. Every ‘may’ is accompanied by an asterisk and the words ‘or may not’ are buried in the fine print.”
Those who use numbers, on the other hand, contribute to better decision-making.
“If you give me a precise number,” Koehler explains in the workshop, “I’ll know what you mean, you’ll know what you mean, and then the decision-maker will be able to decide whether or not to proceed with the plan.”
Tetlock agrees. “Vague expectations about indefinite futures are not helpful,” he writes. “Fuzzy thinking can never be proven wrong.”
If we are serious about making informed decisions about the future, we need to stop hiding behind hedge words of dubious value.
3. Language can’t provide feedback to demonstrate a track record. Numbers can.
In some fields, the transition away from vague verbiage is already happening. In sports, coaches and analysts use probabilities to assess the strengths and weaknesses of a particular team or player. In weather forecasting, numbers are the standard: “30% chance of showers” tells us much more than “slight chance of showers.” Moreover, because weather forecasters get ample, rapid feedback, they are exceptionally well calibrated: when they say there is a 30% chance of showers, it rains on about three of every ten such days and stays dry on the other seven. They achieve that accuracy by using numbers, and we know exactly what they mean by those numbers.
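To make the idea of calibration concrete, here is a minimal Python sketch. It buckets probability forecasts and compares each bucket’s stated chance with how often the event actually occurred. The function name and the sample data are illustrative assumptions for this article, not Good Judgment’s or any weather service’s actual scoring code.

```python
from collections import defaultdict

def calibration_table(forecasts, outcomes):
    """Bucket probability forecasts (rounded to the nearest 10%) and
    compare each bucket's stated chance with the observed frequency
    of the event actually happening."""
    buckets = defaultdict(list)
    for p, happened in zip(forecasts, outcomes):
        buckets[round(p, 1)].append(happened)
    return {p: (sum(hits) / len(hits), len(hits))
            for p, hits in sorted(buckets.items())}

# Ten hypothetical days on which the forecast was "30% chance of
# showers"; a well-calibrated forecaster is right on about 3 of them.
forecasts = [0.3] * 10
outcomes  = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0]  # 1 = showers, 0 = none
for p, (freq, n) in calibration_table(forecasts, outcomes).items():
    print(f"said {p:.0%}: happened {freq:.0%} of the time (n={n})")
```

A forecaster whose stated probabilities line up with the observed frequencies, bucket by bucket, is well calibrated; a vague phrase like “slight chance” cannot be checked this way at all.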
Another well-calibrated group of forecasters is the Superforecasters at Good Judgment Inc, an international team selected from a much larger pool of forecasters on the strength of their track records. When assessing questions about geopolitics or the economy, the Superforecasters use numeric probabilities that they update regularly, much as weather forecasters do. This takes mental discipline, Koehler says. When forecasters are forced to translate terms like “serious possibility” or “fair chance” into numbers, they have to think carefully about how they are thinking, question their assumptions, and seek out arguments that could prove them wrong. And their track record is available for all to see. All of this leads to better-informed, more accurate forecasts that decision-makers can rely on.
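The standard yardstick for such track records, used in Superforecasting and in forecasting tournaments, is the Brier score: the mean squared difference between probability forecasts and what actually happened (in the common 0-to-1 formulation, 0 is perfect and a flat “50-50” on everything scores 0.25). The short sketch below uses hypothetical numbers to show why a pundit whose “may or may not” amounts to 50% every time scores worse than a forecaster who commits to calibrated probabilities.

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probability forecasts and outcomes
    (0 = perfect; always answering 50% yields 0.25)."""
    return sum((p, o) == (p, o) and (p - o) ** 2
               for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical track records on the same five yes/no questions
# (1 = the event occurred, 0 = it did not):
outcomes = [1, 0, 0, 1, 0]
sharp    = [0.9, 0.2, 0.1, 0.8, 0.3]   # confident and usually right
hedger   = [0.5, 0.5, 0.5, 0.5, 0.5]   # "may or may not," as a number
print(brier_score(sharp, outcomes))    # 0.038
print(brier_score(hedger, outcomes))   # 0.25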
Good Judgment Inc is the successor to the Good Judgment Project, which won a massive US government-sponsored geopolitical forecasting tournament and generated forecasts that were 30% more accurate than those produced by intelligence community analysts with access to classified information. The Superforecasters are still hard at work providing probabilistic forecasts along with detailed commentary and reporting to clients around the world. For more information on how you can access FutureFirst™, Good Judgment’s exclusive forecast monitoring tool, visit https://goodjudgment.com/services/futurefirst/.