Superforecaster Ryan Adler turns a live CNBC disagreement about Tesla shares into a quick guide on clarity. Good forecasting starts with shared definitions.
On Monday morning (4 August 2025), I was pounding away at my keyboard with CNBC playing in the background. Living in the Mountain time zone, morning meant the Halftime Report, hosted by Scott “The Judge” Wapner. I was listening loosely when it became clear that Wapner and “Investment Committee” member Joe Terranova disagreed over whether Tesla shares were up or down over the past month. The exchange was cordial but awkward: Wapner insisted the shares were down, based on where the stock was trading that morning, while Terranova was confident they were up. They eventually went to commercial and came back having discovered the source of the discrepancy. The problem wasn’t that one was right and the other wrong. The problem was that they were each defining “month” differently.
A month before 4 August 2025 would have been 4 July 2025, a market holiday. The chart CNBC showed referenced Tesla’s closing price on 3 July (about $315). Terranova, on the other hand, was using the opening price at the opening bell on 7 July 2025, four weeks earlier, when the stock was a bit under $300. The two talked past each other until the reason for the difference was identified.
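The disagreement reduces to simple arithmetic once the baseline is fixed. Here is a minimal sketch of that calculation; the 4 August intraday price is a hypothetical placeholder (the article only implies it sat between the two baselines), and the baselines are the approximate figures given above.

```python
# Approximate prices from the article; price_aug_4 is an assumed
# illustrative value, not a reported quote.
close_jul_3 = 315.00   # last close before the 4 July market holiday
open_jul_7 = 299.00    # opening bell four weeks before 4 August
price_aug_4 = 305.00   # hypothetical intraday price that morning

def pct_change(baseline: float, current: float) -> float:
    """Percent change from a chosen baseline to the current price."""
    return (current - baseline) / baseline * 100

# Same question, two reasonable baselines, opposite answers:
vs_close = pct_change(close_jul_3, price_aug_4)  # negative: "down"
vs_open = pct_change(open_jul_7, price_aug_4)    # positive: "up"
print(f"vs 3 Jul close: {vs_close:+.1f}%")
print(f"vs 7 Jul open:  {vs_open:+.1f}%")
```

Both numbers are correct; they simply answer differently defined questions, which is exactly the ambiguity a well-written forecasting question must eliminate.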
What does this have to do with forecasting? Everything!
Among the many lessons that came out of the Good Judgment Project, one was clear: the fight against ambiguity is essential and never-ending. While others may give this fight a lower priority, it is front and center in our minds at Good Judgment with every question drafted and reviewed.
If a term or clause could be interpreted reasonably in different ways, we define that term and include examples as needed. And even if someone interprets something in an arguably unreasonable way, such as asserting that the death of a country’s president doesn’t mean that the person stops being that country’s president (it’s happened repeatedly, for some reason), we clarify.
We aren’t perfect, and the world sometimes creates situations that weren’t on anyone’s radar when a germane question was launched. That said, we know that everybody must contemplate the same elements of an event they are asked to forecast. Leaning on Potter Stewart’s concurrence in Jacobellis v. Ohio, where he said, “I know it when I see it,” may work when deciding that a movie is not obscene, but it is no way to set a threshold for a forecasting question. Otherwise, we would invite static from the crowd instead of signal.
Bottom line: The CNBC confusion shows how ambiguity kills forecasts. Define upfront what counts, when it counts, and who decides, and leave as little as possible to interpretation. Good forecasting starts with good question writing.
Do you have what it takes to be a Superforecaster? Find out on GJ Open!
* Ryan Adler is a Superforecaster, GJ managing director, and leader of Good Judgment’s question team
Schedule a consultation to learn how our FutureFirst monitoring tool, custom Superforecasts, and training services can help your organization make better decisions.