Testing Polymarket’s “Most Accurate” Claim

In the case of central bank forecasts, the claim does not hold up when compared to our panel of professional Superforecasters, writes Chris Karvetski, PhD, GJ Senior Data and Decision Scientist.

Figure 1. The averaged meeting Brier scores.

In a recent 60 Minutes segment, Polymarket CEO Shayne Coplan described his platform in sweeping terms: “It’s the most accurate thing we have as mankind right now. Until someone else creates sort of a super crystal ball.”

It’s a memorable line and an ambitious claim that, at least in the case of central bank forecasts, does not hold up when compared to our panel of professional Superforecasters.

Superforecasting emerged from more than a decade of empirical research, systematic evaluation, and the cultivation of best practices. Polymarket emerged from a very different ecosystem of venture capital, market design, and financial incentives. But origins, pedigrees, and resources ultimately do not decide accuracy. Head-to-head testing on matched forecasting questions does. Central bank rate decisions provide an ideal setting for such an evaluation, which is why we compared Polymarket and Superforecaster forecasts across the full set of 25 recent monetary policy meetings of the Federal Reserve, European Central Bank, Bank of England, and Bank of Japan for which forecasts from both platforms were available.[1]

For each meeting, we aligned forecasts on three mutually exclusive outcomes—raise, hold, or cut—and evaluated probabilistic accuracy using the Brier score, the standard scoring rule for such forecasts. Lower scores indicate better performance, yielding a clean, apples-to-apples basis for objective comparison across platforms.
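The multi-category Brier score described above is straightforward to compute: it is the sum of squared differences between the forecast probabilities and the realized outcome indicator across the three options. A minimal sketch (with illustrative, hypothetical probabilities, not figures from our data set):

```python
def brier_score(probs, outcome):
    """Multi-category Brier score: sum of squared errors between the
    forecast probability vector and the realized outcome (0/1) vector.
    Lower is better; over three mutually exclusive outcomes it ranges
    from 0 (a confident, correct forecast) to 2 (a confident, wrong one)."""
    return sum((p - o) ** 2 for p, o in zip(probs, outcome))

# Forecast over (raise, hold, cut); suppose the central bank held rates.
score = brier_score([0.05, 0.80, 0.15], [0, 1, 0])
print(round(score, 4))  # 0.065
```

Because both platforms' forecasts are scored with the same rule against the same realized outcomes, the resulting numbers are directly comparable.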

We used two complementary approaches, both pointing in the same direction. First, across all 1,756 daily forecasts, Superforecasters achieved lower (i.e., better) scores on 76 percent of days, with an average daily score of 0.135 compared to 0.159 for Polymarket. In other words, the prediction market’s performance was about 18 percent worse. Second, to account for unequal forecast horizons across meetings, we averaged daily scores within each meeting and then averaged those scores across the 25 meetings. On this basis, Superforecasters achieved an average score of 0.102, compared to 0.126 for Polymarket, making Polymarket roughly 24 percent worse.
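The difference between the two aggregation approaches is worth making concrete: pooling all daily scores gives more weight to meetings with longer forecast horizons, while averaging within each meeting first gives every meeting equal weight. A toy sketch with made-up daily scores (not the article's data):

```python
# Hypothetical daily Brier scores for two meetings with unequal horizons.
meetings = {
    "meeting_A": [0.20, 0.10, 0.05],                    # 3 forecast days
    "meeting_B": [0.30, 0.25, 0.20, 0.15, 0.10, 0.05],  # 6 forecast days
}

# Approach 1: pool every daily score. Longer-horizon meetings
# contribute more observations and therefore more weight.
all_daily = [s for scores in meetings.values() for s in scores]
pooled_avg = sum(all_daily) / len(all_daily)

# Approach 2: average within each meeting first, then across meetings,
# so each meeting counts equally regardless of horizon length.
per_meeting = [sum(s) / len(s) for s in meetings.values()]
meeting_avg = sum(per_meeting) / len(per_meeting)

print(round(pooled_avg, 4), round(meeting_avg, 4))
```

In our analysis both approaches favored the Superforecasters, which is why we report the comparison as robust to the choice of aggregation.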

This pattern is consistent with prior evidence. Superforecasters have a documented[2] history of strong performance in central bank forecasting, including comparisons against futures markets and other financial benchmarks, with coverage in The New York Times[3] and the Financial Times[4]. Taken together, the evidence shows that when forecasting systems are evaluated head-to-head on the same questions using standard accuracy metrics, the Superforecasters’ aggregate forecast performs better in this domain than prediction markets, undercutting claims of universal predictive supremacy.

* Chris Karvetski, PhD, is the Senior Data and Decision Scientist at Good Judgment Inc.



[1] Polymarket coverage was not uniform across all central bank meetings. For example, forecasts were available for meetings in March 2024 and June 2024, but not for the 30 April/1 May meeting. Our analysis includes all meetings and all forecast days for which both platforms provided data, without selectively excluding any overlapping observations.

[2] See Good Judgment Inc, “Superforecasters Beat Futures Markets for a Third Year in a Row,” 12 December 2025.

[3] See Peter Coy: “A Better Forecast of Interest Rates,” New York Times, 21 June 2023 (may require subscription).

[4] “Looking at the data since January [2025], it is clear that the superforecasters continue to beat the market.” Joel Suss, “Monetary Policy Radar: ‘Superforecasters’ tend to beat the market,” Financial Times, October 2025 (requires subscription to FT’s Monetary Policy Radar).

Keep up with the latest Superforecasts with a FutureFirst subscription.

Superforecasters Beat Futures Markets for a Third Year in a Row

Sure enough, the Fed cut rates again in December. Good Judgment’s Superforecasters had generally expected this outcome since the question was launched in October. They trimmed their odds briefly following conflicting statements from Fed officials, a situation compounded by the lack of data on how the economy fared during the shutdown. They later raised their confidence once again that there would be another cut.

Looking at the Brier scores (lower is better) for the Superforecasters and CME’s FedWatch for 2025 as a whole, Good Judgment emerges as nearly twice as accurate as the futures markets.

This time, we have added Polymarket to our analysis. The data shows the prediction market simply tracked CME pricing, volatility and all, adding little to no value for decision-makers.

Bottom line: The Superforecasters have shown less noise, reflecting genuine uncertainty in their forecasts when it was warranted, and they have now beaten CME’s FedWatch for the third year in a row.

Keep up with the latest Superforecasts with a FutureFirst subscription.

Interview with the winner of the “Right!” said FRED Challenge

In this interview, we sit down with the winner of the “Right!” said FRED Challenge for Q2 2024, Sigitas Keras. Known on GJ Open as sigis, Sigitas is an experienced quant and trader who decided to explore the world of forecasting after an impressive 25-year career in finance. With a PhD in mathematics and a natural curiosity about the world, he shares insights into the unique challenge he has taken on to forecast every question on GJO in 2024 and the strategies that helped him excel on the platform. Originally from Lithuania, Sigitas currently lives in Canada.

GJO: What is your background, and how did you first become interested in forecasting?

I was born in Lithuania and have a PhD in maths but, like many others with a similar background, ended up in the finance industry. After almost 25 years as a quant and a trader, I recently retired, which freed up a lot of time for other things. I tried forecasting on GJO for the first time a couple of years ago. It seemed like an interesting challenge where I could combine analytical skills with a general curiosity about the world.

GJO: How did you learn about GJ Open? How would you describe your experience on the platform so far?

I read Tetlock’s book Superforecasting, so that was likely the initial prompt, but to be honest I don’t remember the full details anymore. Rightly or wrongly, I am one of the few forecasters who decided to forecast every question in 2024. It was very enjoyable, and I feel I learnt a lot both about forecasting and about various topics, but I have to admit this is getting too difficult to maintain. I don’t think I’ll continue doing all questions next year, and most likely I will just focus on a few challenges, but I still like to maintain a good mixture of various topics.

GJO: What was your approach to the “Right!” said FRED Challenge? What do you think helped you come out on top?

I like questions that have good supporting data. In that sense, the FRED challenge is perfect for me. Whenever there is good data available, I try to use some mathematical model. Having a background in the finance industry helps a bit with that, although I don’t think I use anything that requires more than FRED and other publicly available data and a Google spreadsheet. I also try to update my forecasts regularly, typically once a week. I think consistency is another important component of successful forecasting.

GJO: What topics would you consider of particular interest to forecast for 2025 and beyond?

I tend to forecast better when there is good data available for analysis. On the other hand, geopolitical questions are often much more challenging, so perhaps I will focus on improving there. My goal is to improve my score in the Superforecasting Workshops challenge!

GJO: Is there anything you would like to add that would be of particular interest to other forecasters on GJ Open?

I feel I am still very new to forecasting and to the community. One thing I hope is to learn more about other forecasters, their backgrounds, their approaches to forecasting. And if anyone has any questions for me, feel free to reach out.

See the latest forecasting challenges on GJ Open and try your hand at forecasting!