Keeping It Civil – Promoting an Open-Minded Dialog on Good Judgment Open

A wise crowd encompasses diverse views.

Good Judgment Project research found that being an “actively open-minded thinker” is positively correlated with being an accurate forecaster. That’s no mystery. Exposure to views with which we disagree – even views that we find repugnant – can inform our understanding of the world in which we live. And the better we understand that world, the better we can project what will happen under various conditions.

On Good Judgment Open (GJO), our public forecasting platform, we strive to foster the candid exchange of opinions without personal attacks on others who express opposing views and without using profane language or offensive epithets. We do not censor comments simply because they express opinions with which we, individually or as a company, disagree.

We rely primarily on our forecasting community to let us know if any comments posted on GJO fall outside the reasonable bounds of civil discourse. Any forecaster can “flag” a comment, which triggers a review by our site administrators. We’re happy to report that flagged comments are rare; from time to time, however, we have deleted inflammatory remarks, especially ones that personally insult other forecasters, and have cautioned GJO forecasters to find ways to “disagree without being disagreeable.”

Can Forecasting Tournaments Reduce Political Polarization?

The relative rarity of nastiness on GJO may surprise those accustomed to the rough-and-tumble of the Twitterverse. But it’s little surprise to Good Judgment researchers. Our co-founder Barb Mellers launched a three-year, National-Science-Foundation-funded research program to investigate whether participation in forecasting tournaments could moderate political polarization. Specifically, Mellers and her co-authors Philip Tetlock and Hal Arkes wanted to explore:

How feasible is it to induce people to treat their political beliefs as testable probabilistic propositions open to revision in response to dissonant as well as consonant evidence and arguments?

Mellers, B., Tetlock, P. E., & Arkes, H. R., Cognition, https://doi.org/10.1016/j.cognition.2018.10.021.

They put this question to the test in a two-year forecasting tournament they dubbed the Foresight Project. The tournament went beyond prior Good Judgment Project research in that it included questions about controversies in US domestic politics, not just geopolitical events abroad. That increased the likelihood that Foresight Project participants, who were predominantly US residents, would hold strong views on the forecasting questions.

In brief, Mellers and her colleagues found that treating contested political beliefs as testable forecasts tended to temper polarization rather than inflame it. Those findings shed light on why the conversation on Good Judgment Open seems so civil compared to what we see elsewhere on social media.

Forecasting tournaments are not a panacea for what ails our political conversations. Even the most accurate forecasters – including some who earn the Superforecaster® credential – occasionally express their views in polarizing prose and endorse opinions that many consider to be immoral. In that respect, forecasting accuracy is like other forms of competence – in Phil Tetlock’s words, “[t]here is no divinely mandated link between morality and competence.”[1]

Nonetheless, we are optimistic that Good Judgment Open contributes to a more thoughtful public dialog and encourages our forecasters to listen carefully to points of view that they might otherwise dismiss. For example, we took great pleasure in hosting an “adversarial collaboration” challenge on the Iran nuclear deal, inspired by a New York Times op-ed co-written by Phil Tetlock and Peter Scoblic. We plan to give GJO forecasters more opportunities to test whether their views on controversial subjects lead to more or less accurate predictions.

As the 2020 election cycle intensifies, we forecast with near certainty (p>.99) that the public debate will grow even more heated and more personal. Together, let’s preserve Good Judgment Open as a place where facts and reasoned argument reign supreme. We have nothing to lose but our illusions.

[1] Tetlock, P. E., & Gardner, D. (2015). Superforecasting: The Art and Science of Prediction. New York: Crown, p. 229.

How to Become a Superforecaster®

A BBC listener asked the team behind their “CrowdScience” radio show and podcast whether she might, in fact, be a Superforecaster. She’s not the first to wonder if she would qualify – and not the first to be curious about how Good Judgment spots and recruits superior forecasting talent.

For all curious souls out there, here’s the inside scoop.

Superforecaster Origins: the Good Judgment Project

Superforecasters were a surprise discovery of the Good Judgment Project (GJP), the research-and-development project that preceded Good Judgment Inc. GJP was the winner of the massive US-government-sponsored four-year geopolitical forecasting tournament known as ACE (Aggregative Contingent Estimation).

As our co-founder Phil Tetlock explains in his bestseller Superforecasting: The Art and Science of Prediction, we set up GJP as a controlled experiment with random assignment of participants. Results from the first year of the tournament established that teaming and training could boost forecasting accuracy. Therefore, we selected GJP superforecasters from the top 2% of forecasters in each experimental condition to account for the advantages of having been on a team and/or received training. To minimize the chance that outstanding accuracy resulted from luck rather than skill, we limited eligibility for GJP superforecaster status to those forecasters who participated in at least 50 forecasting questions during a tournament “season.”
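For readers who like to see the mechanics, here is a minimal sketch of a selection rule along the lines described above: filter to forecasters with at least 50 questions, then take the most accurate 2% within each experimental condition. It assumes accuracy is summarized as a mean Brier score (lower is better); the data layout and function names are illustrative, not GJP’s actual code.

```python
# Illustrative sketch only -- field names and data layout are hypothetical,
# not the Good Judgment Project's actual selection code.
from dataclasses import dataclass

@dataclass
class ForecasterRecord:
    forecaster_id: str
    condition: str      # experimental condition, e.g. "trained team" or "solo control"
    questions: int      # number of questions forecast during the season
    mean_brier: float   # average Brier score; 0 is perfect, 2 is worst on a binary question

def select_superforecasters(records, min_questions=50, top_fraction=0.02):
    """Within each condition, keep forecasters with >= min_questions answered,
    then return the most accurate top_fraction of them (lowest mean Brier)."""
    by_condition = {}
    for rec in records:
        if rec.questions >= min_questions:
            by_condition.setdefault(rec.condition, []).append(rec)

    selected = []
    for group in by_condition.values():
        group.sort(key=lambda rec: rec.mean_brier)         # lower Brier = more accurate
        cutoff = max(1, round(len(group) * top_fraction))  # top 2% of each condition
        selected.extend(rec.forecaster_id for rec in group[:cutoff])
    return selected
```

Comparing forecasters only against others in the same condition is what “account for the advantages of having been on a team and/or received training” amounts to in practice: a solo, untrained forecaster is never ranked against a trained team member.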

Professional Superforecaster Selection

When the ACE tournament ended in mid-2015, Good Judgment Inc invited the forecasters with the best-established track records to become the core of our professional Superforecaster contingent. Most of Good Judgment Inc’s 150+ professional Superforecasters qualified through their relative accuracy during GJP. They had not only earned GJP superforecaster status over one tournament season, but also confirmed their accuracy over 50+ questions in a second year of forecasting.

Since the ACE tournament ended, Good Judgment has continued to identify and recruit new professional Superforecasters via our public forecasting platform, Good Judgment Open. Many of these new recruits never had a chance to participate in ACE, and they have performed every bit as well as the “original” GJP superforecasters.

Do You Have What It Takes?

Each autumn, Good Judgment identifies and recruits potential Superforecasters from the ranks of GJ Open forecasters. If you think you have what it takes to be a Superforecaster, put that belief to an objective test: forecast on at least 100 GJ Open questions (cumulatively, not necessarily in one year). We look at many metrics, but the one we weigh most heavily is average accuracy score per closed question, as sketched below. We also look at comment quality, because understanding the rationales behind forecasts is important to our partners and clients. Collegiality matters too, to ensure good teamwork with other Superforecasters. And it goes without saying that you’ll need to comply with the GJ Open Terms of Service.
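As a rough illustration of what “average accuracy score per closed question” can mean in practice, the sketch below computes a mean Brier score across closed questions. GJ Open’s exact scoring (for example, how forecasts carried forward over time are averaged) is not spelled out here, so treat the functions and example numbers as hypothetical.

```python
# Hypothetical illustration of a per-question accuracy average using the
# Brier score; this is not GJ Open's exact scoring implementation.

def brier_score(probabilities, outcome_index):
    """Brier score for one question: sum of squared differences between the
    forecast probabilities and the realized outcome (0 = perfect, 2 = worst)."""
    return sum(
        (p - (1.0 if i == outcome_index else 0.0)) ** 2
        for i, p in enumerate(probabilities)
    )

def average_accuracy(question_forecasts):
    """question_forecasts: list of (probabilities, outcome_index) pairs,
    one per closed question. Returns the mean Brier score per question."""
    scores = [brier_score(probs, outcome) for probs, outcome in question_forecasts]
    return sum(scores) / len(scores)

# Example: 90% on the correct outcome of one question (Brier 0.02),
# 60% on the wrong outcome of another (Brier 0.72) -> average 0.37.
print(average_accuracy([([0.9, 0.1], 0), ([0.6, 0.4], 1)]))
```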

Those who consistently outperform the crowd are automatically eligible for our annual professional Superforecaster selection process and, potentially, a trial engagement. Those who successfully complete the three-month trial period become full-fledged Superforecasters.

So, there you have it. Becoming a professional Superforecaster is the ultimate meritocracy. We don’t care where you live (as long as you have reliable Internet access!). When evaluating your qualifications for Superforecaster status, we ignore your gender, age, race, religion, and even education. (We are, however, keen to have an increasingly diverse pool of professional Superforecasters and encourage people of all backgrounds to test their skills on GJ Open!) We simply want to find the world’s most accurate forecasters – and to nurture their talents in a collaborative environment with other highly skilled professionals.

If you are not invited this time around, please keep forecasting on GJ Open and honing your skills. We find the process often identifies GJ Open “veterans” who have recently elevated their performance into the top tier.

… If You’re Still Curious

While you’re pondering whether you have what it takes to be a Superforecaster, check out the BBC CrowdScience podcast episode that answers the listener’s question.

And, as always, you can visit our Superforecaster Analytics page to learn more about how Good Judgment’s professional Superforecasters provide early insights and well-calibrated probability estimates about key risks and opportunities to help governments, corporate clients, and NGOs make better decisions.