Superforecasting® the Fed: A Look Back over the Last 6 Months

The Federal Reserve’s target range for the federal funds rate is the single most important driver in financial markets. Anticipating inflection points in the Fed’s policy has immense value, and Good Judgment’s Superforecasters have been beating the futures markets this year, signaling the Fed would continue to hike until the June pause while markets and experts alike flip-flopped on their calls.

  • Ahead of the Fed’s March meeting, when Silicon Valley Bank went under, the futures markets priced out a hike and began flirting with the possibility of a cut as early as summer. Leading market observers like Goldman Sachs said the Fed would pause, and Nomura said it would start to cut at that meeting.

  • Then the futures markets priced in a pause for the May meeting. Experts like PIMCO’s former chief economist Paul McCulley also prematurely predicted that the Fed would go on hold. As the date of the meeting approached, the futures markets—as well as most market participants—came to share the Superforecasters’ view that another hike was in the cards, but they lagged behind the Superforecasters by nearly a month.

  • In the weeks heading into the June meeting, the futures were oscillating between a pause, a hike, and possibly even a cut. A stream of stronger economic data led experts such as Mohamed El-Erian to forecast that the Fed would continue to raise rates for at least another meeting and perhaps longer. Not the Superforecasters. They had been saying since 2 April 2023 that the Fed would most likely hit pause—a view that, once again, eventually became the consensus.

When comparing the forecasts of two groups—Good Judgment’s Superforecasters and the futures markets using the CME FedWatch Tool—for the last four Federal Reserve meetings, the Superforecasters assigned higher probabilities to the correct outcome. They were 66% more accurate than the futures (as measured by Brier scores) and had lower noise in their forecasts (as measured by standard deviation).
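The Brier score mentioned above measures forecast error as the squared distance between the probabilities assigned and what actually happened, so lower is better. A minimal sketch of the calculation, using hypothetical probabilities (not the actual figures behind the 66% result) for a three-outcome Fed question:

```python
def brier_score(prob_forecast, outcome_index):
    """Multi-category Brier score: sum of squared differences between
    the forecast probabilities and the 0/1 outcome vector."""
    return sum((p - (1.0 if i == outcome_index else 0.0)) ** 2
               for i, p in enumerate(prob_forecast))

# Hypothetical forecasts for one meeting (cut / hold / hike);
# suppose the actual outcome was "hike" (index 2).
superforecasters = [0.05, 0.15, 0.80]
futures_market   = [0.10, 0.40, 0.50]

bs_super = brier_score(superforecasters, 2)  # 0.065
bs_fut   = brier_score(futures_market, 2)    # 0.42
# Relative accuracy edge: 1 - (0.065 / 0.42) ≈ 0.85 in this toy case
```

Averaging such scores over all questions and comparing the two groups’ averages is the standard way to express one group being “X% more accurate” than another.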

See our new white paper for details. We also provide subscribers with a full summary of all our active Fed forecasts, which is updated before and after each meeting (available on request).

Good Judgment’s Superforecasters have been providing a clear signal on the Fed’s policy well before the futures and many market participants. Subscribers to FutureFirst™ have 24/7 access to evolving forecasts by the Superforecasters on questions that matter, including Fed policy through the rest of the year and beyond, along with a rich cross-section of other questions crowd-sourced directly from users, including questions on Ukraine, China, and the upcoming US elections.

Superforecasters: Still Crème de la Crème Six Years On

The multi-year geopolitical forecasting tournament sponsored by the research arm of the US Intelligence Community (IARPA) that led to the groundbreaking discovery of “Superforecasters” ended in 2015. Since then, public and private forecasting platforms and wisdom-of-the-crowd techniques have only proliferated. Six years on, are Good Judgment’s Superforecasters still more accurate than a group of regular forecasters? What, if anything, sets their forecasts apart from the forecasts of a large crowd?

[Figure: bar graph showing the Superforecasters’ error scores are lower than those of regular forecasters.] From the paper: Superforecasters’ accuracy outstrips wisdom-of-the-crowd scores.

A new white paper by Dr. Chris Karvetski, senior data and decision scientist with Good Judgment Inc (GJ Inc), compares six years’ worth of forecasts on the GJ Inc Superforecaster platform and the GJ Open public forecasting platform to answer these questions.

Key takeaway: Superforecasters, while a comparatively small group, are significantly more accurate than their GJ Open forecasting peers. The analysis shows they can forecast outcomes 300 days prior to resolution better than their peers do at 30 days from resolution.

Who are “Superforecasters”?

During the IARPA tournament, Superforecasters routinely placed in the top 2% for accuracy among their peers and were a winning component of the Good Judgment Project, one of five teams that competed in the initial tournaments. Notably, these elite forecasters were over 30% more accurate than US intelligence analysts forecasting the same events with access to classified information.

Key Findings

[Figure: calibration plot showing the Superforecasters are 79% closer to perfect calibration.] From the paper: Regular forecasters tend to show overconfidence, whereas the Superforecasters are close to perfect calibration.

Dr. Karvetski’s analysis, presented in “Superforecasters: A Decade of Stochastic Dominance,” uses six years of forecasting data (2015–2021) on 108 geopolitical questions posted simultaneously on Good Judgment Inc’s Superforecaster platform (available to FutureFirst™ clients) and on Good Judgment Open (GJ Open), a public platform where anyone can sign up, make forecasts, and track their accuracy over time and against their peers.

The data showed:

  • Despite being relatively small in number, the Superforecasters are much more prolific, making almost four times as many forecasts per question as GJ Open forecasters.
  • They are also much more likely to update their beliefs via small, incremental changes to their forecasts.
  • Based on the Superforecasters’ daily average error scores, they are 35.9% more accurate than their GJ Open counterparts.
  • Aggregation has a notably larger effect on GJ Open forecasters; yet the Superforecaster aggregate forecasts are, on average, 25.1% more accurate than the aggregate of GJ Open forecasts.
  • The average error score for GJ Open forecasters at 30 days from resolution is larger than any of the average error scores of Superforecasters on any day up to 300 days prior to resolution.
  • GJ Open forecasters, in general, were overconfident in their forecasts. The Superforecasters, in contrast, are 79% better calibrated. “This implies a forecast from Superforecasters can be taken at its probabilistic face value,” Dr. Karvetski explains.
  • Finally, between-forecaster noise among the Superforecasters is minimal, implying they are better at translating a variety of different signals into a numeric estimate of chance.
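Calibration, as used in the findings above, asks whether events forecast at p% actually occur about p% of the time. A minimal sketch of how a calibration table can be computed, using made-up forecasts and outcomes purely for illustration:

```python
from collections import defaultdict

def calibration_table(forecasts, outcomes, n_bins=10):
    """Group probability forecasts into bins and compare each bin's
    mean forecast with the observed frequency of the event.
    A well-calibrated forecaster's bins lie near the diagonal;
    overconfidence shows up as extreme bins drifting toward 50%."""
    bins = defaultdict(list)
    for p, y in zip(forecasts, outcomes):
        bins[min(int(p * n_bins), n_bins - 1)].append((p, y))
    table = {}
    for b, pairs in sorted(bins.items()):
        mean_p = sum(p for p, _ in pairs) / len(pairs)
        freq = sum(y for _, y in pairs) / len(pairs)
        table[b] = (round(mean_p, 2), round(freq, 2))
    return table

# Toy data: two confident forecasts that verified,
# two low-probability forecasts that did not occur.
print(calibration_table([0.9, 0.9, 0.1, 0.1], [1, 1, 0, 0]))
# → {1: (0.1, 0.0), 9: (0.9, 1.0)}
```

Here the mean forecast in each bin matches the observed frequency exactly, which is what “taking a forecast at its probabilistic face value” looks like in practice.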

You can read the full paper here.

Where Can I Learn More About Superforecasting?

A subscription to FutureFirst™, Good Judgment’s exclusive monitoring tool, gives clients 24/7 access to Superforecasters’ forecasts to help companies and organizations quantify risk, improve judgment, and make better decisions about future events.

Our Superforecasting workshops incorporate Good Judgment research findings and practical Superforecaster know-how. Learn more about private workshops, tailored to the needs of your organization, or public workshops that we offer.

A journey to becoming a Superforecaster begins at GJ Open. Learn more about how to become a Superforecaster.