
Noise
A GJ Open Forecasting Challenge


The recruitment period for this forecasting challenge has now closed. Thank you to all of the participants!
Challenge questions will be launched to the public on 15 March.


Did you know that emerging research shows the best forecasters are approximately half as noisy as the rest of the crowd? Aside from being well-calibrated and finding the most relevant sources of information, these top forecasters excel largely because they are able to tamp down the noise that is ever-present in today’s world. Do you want to learn to reduce noise in your own forecasting practice?

With this goal in mind, Good Judgment Inc is pleased to partner with Olivier Sibony, professor, industry thought leader, and co-author (together with Daniel Kahneman and Cass R. Sunstein) of Noise: A Flaw in Human Judgment, to bring the lessons from this critically acclaimed book to life in the Noise forecasting challenge. The most accurate forecasters in the challenge will win copies of Noise signed by Daniel Kahneman.

The challenge forecasting questions will launch in early March. We ask interested forecasters to sign up by 3 March 2022 using the appropriate link below. You will then be redirected to an introductory intake survey where you can reserve your spot in the challenge. For more details on how the challenge will work, please continue reading.

Are you new to Good Judgment Open?

Do you already have a GJO account?

How the forecasting challenge will work

The Noise Challenge will ask you to make a series of predictions about current topics on Good Judgment Open, home of the Internet’s smartest crowd™. There will be approximately 15 binary questions and the challenge will last roughly four months.

For this challenge, we’re trying something new: analyzing different theories of noise reduction and studying the sources of forecasting noise highlighted in the book Noise: A Flaw in Human Judgment.

To test these theories, participants will make two types of forecasts for every question. First, they will provide one-time forecasts in a survey, in isolation from the crowd. A week or so later, they will forecast the same questions on Good Judgment Open, this time with visibility into the crowd’s forecasts and rationales, and they can update their forecasts until the questions resolve.

Overall, we hope to have fun while gaining a better understanding of the prevalence and sources of (as well as remedies to) noise within forecasting. At the conclusion of the challenge, all participants will also be able to attend a webinar with Olivier Sibony and the Good Judgment research team where we share what we learned.

Challenge Rules and Prizes

The forecasts from the survey component (one-time forecasts and rationales submitted in a GJOpen survey, in isolation from other forecasters) and the forecasts from the crowd component (forecasts on the same questions in the Noise Challenge tab on GJOpen, with visibility into the crowd’s forecasts and rationales and with updates allowed) will be treated as separate forecasts and scored independently using the Relative Brier Score (Accuracy Score) at the conclusion of the challenge. The scores from the crowd forecasting will appear on the challenge leaderboard, whereas the scores from the survey forecasts will be calculated by the Good Judgment Data Science team and will not be publicly reported. To be eligible for prizes, a forecaster must forecast on ALL questions (approximately 15 binary questions) in either the survey component or the crowd forecasting component.
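
For readers curious how this kind of scoring works, below is a minimal, illustrative sketch in Python of a Brier score and a crowd-relative accuracy score. It assumes the common two-option Brier formulation for binary questions (0 is perfect, 2 is worst) and a median-of-the-crowd benchmark; the exact rules Good Judgment uses (for example, daily averaging and carrying forecasts forward) may differ.

```python
# Illustrative sketch only: the exact GJOpen scoring rules (daily averaging,
# carrying forecasts forward, median vs. mean benchmarks) may differ.
from statistics import median


def brier_score(p_yes: float, outcome_yes: bool) -> float:
    """Two-option Brier score for a binary question (0 = perfect, 2 = worst)."""
    y = 1.0 if outcome_yes else 0.0
    return (p_yes - y) ** 2 + ((1.0 - p_yes) - (1.0 - y)) ** 2


def relative_brier(forecaster_scores: list[float], crowd_scores: list[float]) -> float:
    """Sketch of an accuracy score: forecaster's mean Brier minus the crowd's median Brier.
    Negative values mean the forecaster beat the crowd benchmark."""
    return sum(forecaster_scores) / len(forecaster_scores) - median(crowd_scores)


# Example: a forecaster said 0.8 on a question that resolved "yes",
# while the crowd's Brier scores on that question were 0.5, 0.32, and 0.18.
my_score = brier_score(0.8, True)                     # 0.08
print(relative_brier([my_score], [0.5, 0.32, 0.18]))  # -0.24: better than the crowd median
```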

Our prizes for this challenge are copies of Noise signed by Daniel Kahneman, and there are two chances to win! The most accurate eligible forecaster from the survey forecasting component and the most accurate eligible forecaster from the crowd forecasting component will each be mailed a signed copy of the book. In the unlikely event that there are ties for the most accurate forecaster from either component, Good Judgment reserves the right to randomly select a prize winner among the most accurate forecasters.

Finally, challenge participants can continue forecasting on Good Judgment Open to earn a tryout as a professional Superforecaster®.

About Olivier Sibony

Join the Noise forecasting challenge



