How to predict the future and save the world in 3 easy steps

Posted in GJ Open

by Andrew Sabisky

Imagine you are presented with a cow, and an eccentric Englishman with an abnormally large head asks you to guess what its weight will be after it has been slaughtered and dressed.

This was the question that legendary polymath Sir Francis Galton asked of hundreds of people at a country fair in 1907. Galton found that while no individual guess was correct, the median and mean guesses were extremely close to the correct answer. This was one of the earliest and greatest works of scientific inquiry into the wisdom of crowds, and in 2016, we, Galton’s heirs, are asking you to form the crowd and allow us to learn from your wisdom. But now the stakes are not simply a prize at a country fair, but the future of humanity itself.

At Good Judgment Open, you will find many questions of enormous importance for you to forecast on, ranging from the progress of battles to future outbreaks of disease to national elections. We know from previous research that the wisdom of the crowd can have enormous predictive power, giving policymakers fresh insight into difficult high-stakes problems. You can help politicians avoid epic disasters, such as the war in Iraq, that were fundamentally caused by forecasting failure.

We also know that some individuals are consistently better at forecasting than others. According to the Washington Post, these amateur “Superforecasters” have outperformed seasoned intelligence community professionals by 30%. Are you a Superforecaster? There is one way to find out. Join the game.

At Good Judgment Open you will be scored on clear-cut questions with binary yes/no answers, leaving no room for fudging. Your forecasts will be judged using Brier scoring, a formula often used to rank weather forecasters; lower scores represent more accurate forecasts. You will be asked to submit probability estimates on a scale of 0-100%, so be as granular as you can. There is a difference between a 50% probability and a 52% probability, and that difference matters. Do your best to reflect these subtle gradations when updating your forecasts, which you should do frequently as events unfold.
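To make the scoring concrete, here is a minimal sketch of a Brier score for a binary question. I've used the original two-category form (which ranges from 0 for a perfect forecast to 2 for a maximally wrong one); treat the exact formula GJ Open applies, including how it averages over the days a question is open, as something to confirm on the site itself.

```python
def brier_score(p_yes: float, outcome_yes: bool) -> float:
    """Two-category Brier score for a binary question.

    Sums the squared error over both categories ("yes" and "no"),
    so scores range from 0 (perfect) to 2 (maximally wrong).
    """
    o = 1.0 if outcome_yes else 0.0
    return (p_yes - o) ** 2 + ((1 - p_yes) - (1 - o)) ** 2

# A 70% "yes" forecast, when "yes" occurs:
print(round(brier_score(0.70, True), 4))  # 0.18

# Why granularity matters: 50% vs 52% on the same resolved question.
print(round(brier_score(0.50, True), 4))  # 0.5
print(round(brier_score(0.52, True), 4))  # 0.4608
```

Note how the 52% forecast beats the 50% one once the event occurs: small, honest adjustments compound over many questions, which is why frequent updating pays off.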

To illustrate the process, here is a graph of my forecasts on the presidential election, compared with the crowd on Good Judgment Open and FiveThirtyEight's polls-plus forecast. As you can see, I've been consistently more Trump-optimistic than either of the other two forecasts, and both my forecasts and FiveThirtyEight's have shown fairly major swings in response to events and fluctuations in the polls (the crowd's average forecast has been very stable; in a subsequent post I will explore why). This updating process is a key part of good forecasting; consistent re-evaluation keeps you cognitively flexible.

[Graph: my election forecasts vs. the GJ Open crowd and FiveThirtyEight's polls-plus forecast]

The upcoming US elections are obviously front and center of world news. They have also generated some extremely difficult questions in the Good Judgment Open Monkey Cage US Election Challenge. Donald Trump and Hillary Clinton are candidates who tend to evoke strong emotional reactions, and it's difficult to remain bloodlessly calm and dispassionate when forecasting which of them will ultimately emerge victorious. Nevertheless, you must do your best. We are asking for your best independent estimate of who you think will win; we do not care about who you want to win.

There are also many less high-profile but equally fascinating questions relating to the US election, mostly concerning various Senate races, many of which are extremely tight. Can you beat the betting markets and FiveThirtyEight? If not, can you help our crowd do so? Not everyone in a healthy forecasting ecology has to be a Superforecaster. We need some people to push the boat out and provide the crazy alternative point of view, and others who can dig out the obscure details relevant to a problem.

Above all, we prize and treasure independent thinkers. Good Judgment Open adds no value if everyone simply copies the odds they derive from the betting markets, or the probabilities on FiveThirtyEight. The site allows you to explain the reasoning behind your forecasts, and we encourage you to do so and to engage with fellow forecasters in friendly debate.

Hopefully by now you understand how crowdsourced forecasting works, and why it matters. To start playing the only game that matters, here are those 3 easy steps I promised:

1. Go to gjopen.com.
2. Make an account.
3. Start forecasting.

May Wisdom go with you, for she is more beautiful than the sun, and above all the order of stars: being compared with the light, she is found before it.


3 thoughts on “How to predict the future and save the world in 3 easy steps”

  1. Well argued Andrew!

    U need to stop forecasting &, start making real return by beating the pants off the likes of “standard in the box” thinkers like 538!

    I have always enjoyed Ur feedback & views. Thx.

    From my perspective, as much as Americans deserve a hero, DT has proven himself time & again not up to the task. Not that HRC is much of an alternative but, I’ve ended up holding my nose &, giving DT about a 2/3% Chance.

    Am interested in seeing whether Women & Latinos make a clear statement tomorrow.

    Beyond this, the hardest forecast is the Senate, which continues to be a dice roll. Imagine the FUN HRC has w/ a HOUSE & SENATE gone ROGUE aggghhhh GOP!😳🙈😱

    In any event, the end result w/ whomever, includes an outcome including SERIOUS gridlock, an unpatchable ( is that an English word??!) electorate, a major recession, a House ready to impeach a new President (??) after having over 50% of the electorate vote for her. In other words, business as usual in one of the still Most Powerful Countries on this Good Earth.

    @BG1

    Keep up Ur great work!

  2. Andrew: You note in your excellent article that, “You can help politicians avoid epic disasters, such as the war in Iraq, that were fundamentally caused by forecasting failure.” This has been a theme regarding the Good Judgment Project and IARPA’s very appropriate interest in improving intelligence forecasting.

    There’s no question that those who coordinated the invasion of Iraq did not accurately forecast many of the outcomes — such as the rise of a significant insurgency that resulted in the deaths and injuries to many U.S. military personnel and civilians — and which continues in a new form today.

    However, as you may know, there seems to be a longstanding debate as to whether the conclusion that significant WMDs were in Iraq (and were a significant threat) was 1) motivated by an honest lack of accurate intelligence or 2) the WMD “intelligence” was known to be questionable and was about simply finding a good reason to justify the invasion that the American public and international community would go along with. Or, was it some blending of the two?

    Although Good Judgment may primarily just want to improve intelligence forecasting and educate people about its importance, it could appear to some people that Good Judgment is providing “cover” for those who decided to invade Iraq and coordinated that invasion.

    I wonder if it might be worth considering clarification of “help[ing] politicians avoid epic disasters, such as the war in Iraq, that were fundamentally caused by forecasting failure.”
