Superforecasting AI

This project focused on individual reviews of Joseph Carlsmith’s paper, “Is power-seeking AI an existential risk?,” and featured forecasts on a dozen key questions about risks related to Artificial Intelligence (AI) and Artificial General Intelligence (AGI). It ran from August to October 2022, with a follow-up round in spring 2023, and was made possible with generous support from Open Philanthropy.

The project was divided into two phases.

Phase I, the Main Project, entailed an in-depth evaluation of Joseph Carlsmith’s paper. Good Judgment’s professional Superforecasters were asked to complete a standardized survey, which had also been distributed to a panel of AI experts. Good Judgment’s panel of Superforecasters was subdivided into two groups: those who self-identified as experts in AI and those who did not.

At the initial stage, 21 Superforecasters—10 Superforecasters with expertise on AI and 11 generalists without expertise on AI—worked individually to review the paper and answer a detailed questionnaire. Three Superforecasters preferred not to disclose their individual reviews; the others are provided in the links below. Following that stage, these 21 Superforecasters were joined by 12 colleagues and collaborated as a forecasting team on Good Judgment’s private platform to refine and update their judgments on the forecast questions. All of those reports are provided below.

Figure: Median probabilities for each of the seven questions, with the 25%-75% quantiles, as of 6 April 2023.

In Phase II, a Supplementary Project, the Superforecasters provided forecasts on the following four questions:
(1) Will AGI exist by 1 January 2043?
(2) Will AGI exist by 1 January 2070?
(3) Will AGI exist by 1 January 2100?
(4) Assuming that AGI exists by 2070, will humanity either go extinct or have had its future potential drastically curtailed due to loss of control of AGI by 2200?

Will AGI exist by 2043, 2070, or 2100? The median probabilities and 25%-75% quantiles as of 6 April 2023 suggest an increasing likelihood of AGI through 2100, with growing variance and disagreement among Good Judgment’s Superforecasters. (AGI, as defined in this project, could be said to exist if “for any human that can do any job, there is a computer program…that can do the same job for $25/hour or less.” For the complete definition, please see the Supplementary Report.)
Figure: Assuming that AGI exists by 2070, will humanity either go extinct or have had its future potential drastically curtailed due to loss of control of AGI by 2200? Histogram of individual forecasts; the dark blue line marks the median forecast.
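
To make the summary statistics above concrete, here is a minimal Python sketch of how a median forecast and a 25%-75% quantile range can be computed from a set of individual probability forecasts. The numbers are hypothetical placeholders for illustration only, not the project’s actual Superforecaster values, and the code is not Good Judgment’s own aggregation tooling.

    import numpy as np

    # Hypothetical individual probability forecasts (illustrative only;
    # NOT the project's actual Superforecaster values).
    forecasts = np.array([0.02, 0.03, 0.05, 0.05, 0.08, 0.10, 0.12, 0.15])

    # Median forecast and the 25%-75% quantile range, which conveys the
    # spread (i.e., disagreement) across forecasters.
    median = np.median(forecasts)
    q25, q75 = np.quantile(forecasts, [0.25, 0.75])

    print(f"Median: {median:.2f}, 25%-75% range: {q25:.2f}-{q75:.2f}")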

The reports below present the forecasts alongside the key drivers and risks identified in each phase of the project.

Supporting Materials for Phase II: Supplementary Project

Superforecaster Reports

Intrigued?

Stop guessing. Start Superforecasting.

Schedule a consultation to learn how our FutureFirst monitoring tool, custom Superforecasts, and training services can help your organization make better decisions.

Keep up with Superforecasting news. Discover training opportunities. Make better decisions. Sign up for Good Judgment’s newsletter.
