What’s a month?

Why question wording must be exact in forecasting

Superforecaster Ryan Adler turns a live CNBC disagreement about Tesla shares into a quick guide on clarity. Good forecasting starts with shared definitions.

On Monday morning (4 August 2025), I was pounding away on my keyboard with CNBC playing in the background. Living in the Mountain time zone, morning meant the Halftime Report, hosted by Scott “The Judge” Wapner. I was listening loosely when it became clear that Wapner and “Investment Committee” member Joe Terranova were having a disagreement over whether Tesla shares were up or down over the past month. The exchange was cordial but awkward: Wapner insisted the shares were down over the past month based on where the stock was trading that morning, while Terranova was very confident they were up. They eventually went to commercial and came back having discovered the source of the discrepancy. The problem wasn’t that one was right and the other wrong. The problem was that each was defining “month” differently.

A month before 4 August 2025 would have been 4 July 2025, a market holiday. The chart CNBC showed therefore measured from Tesla’s closing price on 3 July (about $315). Terranova, on the other hand, was measuring from the opening bell on 7 July 2025, four weeks earlier, when the price was a bit under $300. Until that difference was identified, the two were simply talking past each other.
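For the curious, here is a minimal sketch in Python (standard library only, using the approximate prices above) of how the two readings of “a month ago” diverge:

```python
from datetime import date, timedelta

today = date(2025, 8, 4)  # the Monday of the broadcast

# Reading 1 (Wapner / the CNBC chart): the same calendar day one month back.
# 4 July 2025 was a market holiday, so the chart fell back to the prior close.
chart_baseline = date(2025, 7, 3)
chart_price = 315.0  # approximate 3 July closing price

# Reading 2 (Terranova): four weeks of trading back, to the opening bell.
terranova_baseline = today - timedelta(weeks=4)  # Monday, 7 July 2025
terranova_price = 300.0  # approximate 7 July opening price

print(chart_baseline, terranova_baseline)  # 2025-07-03 2025-07-07
# Two defensible baselines, four calendar days and roughly $15 apart:
# enough for the same stock to be "down" by one reading and "up" by the other.
```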

Ambiguity Kills Forecasts

What does this have to do with forecasting? Everything!

Among the many lessons of the Good Judgment Project is that the fight against ambiguity is essential and never-ending. While others may give this fight a lower priority, it is front and center in our minds at Good Judgment with every question drafted and reviewed.

If a term or clause could reasonably be interpreted in different ways, we define it and include examples as needed. And even when someone interprets something in an arguably unreasonable way, such as asserting that a country’s president doesn’t stop being president upon death (it’s happened repeatedly, for some reason), we clarify.

We aren’t perfect, and the world sometimes creates situations that weren’t on anyone’s radar when a germane question was launched. That said, we know that everyone asked to forecast an event must be contemplating the same elements of it. Leaning on Potter Stewart’s concurrence in Jacobellis v. Ohio, where he said, “I know it when I see it,” may work when deciding that a movie is not obscene, but it is no way to set a threshold for a forecasting question. Otherwise, we would invite static from the crowd instead of signal.

Bottom line: The CNBC confusion shows how ambiguity kills forecasts. Define upfront what counts, when it counts, and who decides, and leave as little as possible to interpretation. Good forecasting starts with good question writing.

Do you have what it takes to be a Superforecaster? Find out on GJ Open!

* Ryan Adler is a Superforecaster, GJ managing director, and leader of Good Judgment’s question team

When AI Becomes a False Prophet: A Cautionary Tale for Forecasters

With a nod to Taylor Swift and Travis Kelce, Superforecaster Ryan Adler discusses the gospel according to AI and why forecasters should always verify their sources.

Google’s AI Overview references an AI-generated video to support a false claim.

The promises of artificial intelligence have set up camp in media headlines over the past few years. ChatGPT has become a household name, billions are being spent just to power the equipment to run these programs and models, and the cutting-edge technology is front and center in ongoing tensions between the US and China. It hasn’t left any aspect of human activity untouched, including forecasting.

To be sure, the impacts already felt cannot be overstated. We are looking at the front end of what I’m confident will be a seismic shift in society, with large swaths of labor markets around the globe being shaken to their core. That said, we aren’t there yet.

Here’s an example of how AI took itself out at the knees on a recent forecasting question on Good Judgment Open. In late April 2025, the time came to close a question about potential nuptials between Kansas City Chiefs star Travis Kelce and pop superstar Taylor Swift: “Before 19 April 2025, will Travis Kelce and Taylor Swift announce or acknowledge that they are engaged to be married?” (It’s not my favorite subject matter, but we try to maintain a diverse pool of questions.)

As a moderately rabid Chiefs fan myself, I was confident the answer was no, because an engagement would have made headlines across media outlets. However, a key part of running a forecasting platform is the habit of double- and triple-checking. So, I checked with Google. I entered “Are Travis Kelce and…” into the search field, which immediately autofilled to “are travis and taylor engaged?” (The first-name thing with pop culture stars annoys me to no end, but I digress.) To my surprise, Google’s AI Overview popped up immediately.

“Yes, according to reports, Travis Kelce and Taylor Swift are engaged.”

“Trust, but verify”

Skeptical, I looked at what the experimental generative AI response was using as a reference to return such a statement. That’s when things got fun.

The first link of the cited material was a YouTube video. Keep in mind that Google, the search engine I used to start my research, owns YouTube. The account that posted the video? DangerousAI. That alone raises more red flags than a May Day parade in Moscow circa 1974. The brief video, dated 24 February 2025, purported to show Travis Kelce announcing that Swift and he “got engaged last week.” However, as the video progressed, the absurdity of Kelce’s putative announcement became perfectly clear.

To sum up, Google’s AI system linked to search was fooled by an AI product posted on another Google platform to give a patently false response.

I don’t highlight this incident as a criticism of Google. However, it should serve as a warning. I’ve seen some GJ Open forecasters take AI responses as gospel. I’m here to tell you that in matters of fact vs fiction, AI is very capable of being a false prophet. This is not to say that AI isn’t an incredibly valuable tool. It certainly is! We are finding more and more uses for it at Good Judgment, but we put it through its paces long before we deem it reliable for a particular role. As the Russian proverb instructs, “Trust, but verify.” (No, President Reagan didn’t say it first.) When it comes to AI and everything else you see online, my suggestion is that you just verify.

Do you have what it takes to be a Superforecaster? Find out on GJ Open!

* Ryan Adler is a Superforecaster, GJ managing director, and leader of Good Judgment’s question team

Informed Practice and Superforecasting: Taking Your Forecasts to the Next Level

“Not all practice improves skill. It needs to be informed practice.”
– Phil Tetlock & Dan Gardner in Superforecasting

In any area of decision-making where uncertainty looms large, accuracy is the gold standard. Yet many decision-makers find themselves in a frustrating cycle: sometimes they make the right call, but other times they miss the mark entirely. Inconsistency can be costly. So, what separates those who occasionally succeed from those who reliably deliver top-notch forecasts? The answer lies in informed practice, one of the concepts at the heart of Superforecasting.

What Is Informed Practice?

Informed practice is not just repetition. It’s a deliberate and thoughtful process of learning from each forecast, refining techniques, and continuously updating one’s beliefs based on new information. It’s about approaching forecasting with a Superforecaster’s mindset—an outlook geared toward improvement, with a consistent effort to mitigate one’s cognitive biases.

What Can Forecasters Learn from Superforecasters?

Superforecasters, known for their uncanny forecasting accuracy, exemplify informed practice. They don’t pull numbers out of a hat or look into a crystal ball for answers. For every question they face, they engage in a rigorous process of analysis, reflection, and adjustment. Here’s how informed practice gives them the edge:

1. Learning from Feedback: Superforecasters thrive on feedback. They meticulously track their forecasts, comparing them against the outcomes to identify where they went right and where they missed the mark. This feedback loop is crucial. It allows them to recalibrate their approach and avoid making the same mistakes twice. Over time, this leads to more refined and accurate forecasts.

2. Understanding Probability: A key aspect of informed practice is the understanding and effective use of probability. Superforecasters don’t think in black-and-white, yes-or-no terms. They consider a range of possible outcomes and assign probabilities to each. They also update these probabilities as new information becomes available, a process known as Bayesian reasoning (a minimal numeric sketch follows this list). This probabilistic thinking helps them navigate uncertainty with greater precision.

3. Continuous Learning: The world is constantly changing, and so too are the variables that influence forecasts. Superforecasters are voracious learners, continuously updating their knowledge base. They stay informed about the latest developments in multiple areas, thus grounding their forecasts in the most current data and insights.

4. Mitigating Cognitive Biases: Cognitive biases can cloud judgment and lead to poor forecasts. Superforecasters are keenly aware of these biases and actively work to mitigate their impact. Through informed practice, they develop strategies to counteract such biases as overconfidence, anchoring, confirmation bias, and more, to make well-calibrated forecasts.
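To make Bayesian updating concrete, here is a minimal sketch of a single update. The numbers are hypothetical, chosen only to show the mechanics:

```python
def bayes_update(prior: float, p_e_if_yes: float, p_e_if_no: float) -> float:
    """Posterior P(event | evidence) from a prior and the evidence's likelihoods."""
    numerator = prior * p_e_if_yes
    return numerator / (numerator + (1 - prior) * p_e_if_no)

# You start at 30%. A news report appears that you judge three times as
# likely in a world where the event happens (60%) as in one where it
# doesn't (20%).
posterior = bayes_update(prior=0.30, p_e_if_yes=0.60, p_e_if_no=0.20)
print(f"{posterior:.0%}")  # 56%: the evidence shifts the forecast without settling it
```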

What Is the Role of Collaboration in This?

Informed practice is not a solitary endeavor. Collaboration with other forecasters is a powerful tool for improving accuracy and staying on track. By engaging in discussions, comparing notes, and challenging each other’s assumptions, forecasters can gain new perspectives and insights. Good Judgment’s Superforecasters work in teams, leveraging the collective intelligence of the group to arrive at superior forecasts.

What Practical Steps Can I Take?

1. Keep Track: Keep a record of your forecasts and compare them with the outcomes. Analyze your hits and misses to identify patterns and areas for improvement. (A minimal scoring sketch follows this list.)

2. Seek Feedback: Seek out feedback from peers or through forecasting platforms such as GJ Open, which provides performance metrics. Use this feedback to refine your approach.

3. Diversify Your Sources of Information: Regularly update your knowledge on the topics you forecast and seek out diverse sources. This includes staying current with news, research, and expert opinions, including those you disagree with.

4. Practice Probabilistic Thinking: Assign probabilities to your forecasts and be willing to adjust them as new information emerges. This helps you avoid the trap of binary thinking.

5. Challenge Your Assumptions: Regularly question your assumptions and be open to changing your mind. This flexibility is crucial in a rapidly changing world.

6. Get a Head Start with GJ Superforecasting Workshops: Consider enrolling in a Superforecasting workshop. Good Judgment’s workshops, led by Superforecasters and GJ data scientists, leverage our years of experience in the field of elite forecasting as well as new developments in the art and science of decision-making to provide you with structured guidance on improving your forecasting skills. Our practical exercises will boost your informed practice, offering you lifelong benefits.
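As a companion to step 1, here is a minimal sketch of that kind of record-keeping. It uses a simple binary Brier score on a hypothetical forecast log (GJ Open’s version of the score sums squared errors across all answer options, but the idea is the same):

```python
def brier_score(forecast: float, outcome: int) -> float:
    """Squared error of a binary probability forecast: 0 is perfect, 1 is worst."""
    return (forecast - outcome) ** 2

# Hypothetical log: (probability assigned, actual outcome: 1 = happened, 0 = didn't).
log = [(0.80, 1), (0.65, 0), (0.10, 0), (0.90, 1)]

scores = [brier_score(p, o) for p, o in log]
print(f"Mean Brier score: {sum(scores) / len(scores):.3f}")  # 0.121
```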

Informed practice is the cornerstone of good forecasting and one of the secrets behind the success of Superforecasters. By diligently applying the above principles, you can enhance your forecasting skills and make better-informed decisions. See the workshops we offer to help you and your team take your forecasting success to the next level.