Superforecasting AI Governance

When will nations agree on a shared set of AI safeguards? In their outlook on key milestones in AI governance, Superforecasters expect slow progress toward international coordination. Strategic necessity, technological breakthroughs, and growing recognition of global risk could accelerate cooperation. However, entrenched competitive tensions among countries, bureaucratic inertia, and private-sector pushback suggest that most meaningful coordination will come in the 2040s or 2050s rather than the 2030s.

Across the domains explored in this project, Superforecasters consistently emphasized four themes:

Institutional Drag: Forecasts involving formal agreements, such as a treaty-backed AI regulator or global safety standards, reflect the historical lag in building such institutions. Agencies like the IAEA and OPCW took decades to form after early warnings emerged.

Geopolitical Tension vs. Converging Interests: Even amid US-China rivalry, forecasters point to shared incentives, such as market stability, chip supply security, and mitigating existential risk, that could eventually support cooperation.

Existing Models for Joint Efforts: Examples like CERN, EuroHPC, and Atoms for Peace shape expectations for collaborative AI infrastructure. Ideas such as “Chips for Peace” or cross-border regulatory alignment may draw from these models, though not in the near term.

Catalyst Events: A major AI-related incident, whether a system failure or geopolitical misuse, could upend the status quo. Such an event may increase public pressure enough to override resistance and force faster agreement on safeguards.

The data for this report was generated by Good Judgment Inc’s Superforecasters from 11 April 2025 to 13 May 2025.

Superforecasters are generalist forecasters whose consistent accuracy placed them in the top 1-2% of the more than 100,000 forecasters from around the world. Their forecasts and insights for this project are summarized in the reports below.

| Question | Median Forecast Date |
| --- | --- |
| Q1: When will China join the International Network of AI Safety Institutes or its successor? | 2050 |
| Q2: When will the International Network of AI Safety Institutes or its successor agree to a single, shared set of safety guidelines? | 2039 |
| Q3: When will the US, UK, and at least five other countries have signed a "Chips for Peace"-style agreement by which the countries agree to abide by certain safety and transparency standards and enforce export controls on non-abiding countries? | 2052 |
| Q4: When will China, the US, the UK, and at least four other countries have signed a "Chips for Peace"-style agreement by which the countries agree to abide by certain safety and transparency standards and enforce export controls on non-abiding countries? | 2056 |
| Q5: When will an international body develop an AI model trained using more compute than models developed by any other actor at that time? | 2057 |
| Q6: When will an international AI regulatory agency, like IAEA, for oversight of frontier AI systems be established? | 2051 |
| Q7: When will the US and China be party to any AI regulation agreement that controls or monitors non-military AI development? | 2050 |
| Q8: When will China endorse the Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy or any of its successors? | 2055 |
| Q9: When will China and the US establish a direct AI risk hotline (human-to-human) to prevent miscalculation or escalation related to AI-enabled strategic decision-making? | 2058 |
| Q10: When will the UN Security Council host a high-level meeting explicitly focused on AI existential and/or catastrophic risk? | 2043 |
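
The "median forecast date" above is the midpoint of the individual Superforecasters' answers for each question. As a minimal illustration with hypothetical values (not Good Judgment's actual data or methodology), a median headline number is robust to a few extreme forecasts in a way a simple average is not:

```python
import statistics

# Hypothetical individual forecasts (years) for a single question.
# These values are invented for illustration only.
forecasts = [2045, 2048, 2050, 2050, 2053, 2061, 2070]

# The median is the middle value once forecasts are sorted, so a few
# very late (or very early) answers do not drag the headline number
# the way an average would.
print("Median forecast date:", statistics.median(forecasts))        # 2050
print("Mean forecast date:  ", round(statistics.mean(forecasts), 1))  # 2053.9
```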


Click on the individual reports above or download the full report (2.5 MB).

Intrigued?

Stop guessing. Start Superforecasting.

Schedule a consultation to learn how our FutureFirst monitoring tool, custom Superforecasts, and training services can help your organization make better decisions.


Keep up with Superforecasting news. Discover training opportunities. Make better decisions. Sign up for Good Judgment’s newsletter.
