Will AGI create a consensus among experts on how to safely increase AI capabilities?

This market will resolve one year after this Metaculus question resolves: https://www.metaculus.com/questions/5121/date-of-artificial-general-intelligence/

This market will resolve as N/A if no AGI meeting the Metaculus criteria has been created by 2050.

If the Metaculus question resolves as YES, this market will resolve based on whether leading AI researchers and leading AIs say that they've agreed on a clear plan that will keep any further development of AI safe.

I plan to evaluate the safety, clarity, and extent of agreement on the plan primarily by asking three leading AIs. My planned prompt is:

>Please evaluate whether at least 90% of the leading AI developers have agreed on a clear plan for ensuring the safety of any further development of AI capabilities. I plan to use this evaluation to help me resolve a Manifold Market question about when we'll get a safe approach to developing powerful AI. Try to interpret this prompt as it would have been understood in March 2025. Your answer should reflect the beliefs of both humans and AIs who have been responsible for creating the most capable AIs, as well as leaders of the most reputable AI safety organizations. 

>Please use an interpretation of safety that's focused on the risk of AI causing large-scale death. Please consider a plan safe if at least two thirds of well-informed people agree that the benefits of the plan substantially outweigh the risks, and that there's little reason to expect that we can get a lower p(doom) by delaying AI capability work in order to further research safety issues.

>

>For the plan to qualify as "clear," it must have comparable specificity and actionability to:

>-the Manhattan Project 1 year before Hiroshima;

>-the Apollo Program 2 years before the moon landing;

>-Waymo's robocar software circa 2020;

>-Operation Warp Speed in May 2020.

>

>Plans lacking sufficient detail (similar to the vague safety assurances from AI companies in 2024) will not qualify.

I will choose the AIs based on my impressions of their fairness and their access to up-to-date news. If I were resolving this today, I would expect to use Perplexity (first with Claude, then with GPT-4.5, as the underlying model) and DeepSeek R1.

In addition to the evaluations given by the AIs, I will look at discussions among human experts to confirm that the AIs are accurately summarizing human expert opinion.

I will also look at prediction markets, with the expectation that a YES resolution of the market should be confirmed by declining p(doom) forecasts.

I will not trade in this market.
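The resolution procedure above combines three inputs: the verdicts of the consulted AIs, confirmation from human expert discussion, and corroborating prediction-market (p(doom)) trends. A minimal sketch of how those inputs could be combined is below. Note that the unanimity rule for the AI verdicts is an assumption for illustration; the market description does not specify how disagreement among the three AIs would be handled.

```python
def resolve(ai_verdicts, experts_confirm, p_doom_declining):
    """Illustrative sketch of the resolution logic described above.

    ai_verdicts: list of booleans, one per consulted AI, answering whether
        at least 90% of leading AI developers have agreed on a qualifying
        clear plan.
    experts_confirm: whether human expert discussion confirms the AIs'
        summary of expert opinion.
    p_doom_declining: whether prediction-market p(doom) forecasts are
        declining, corroborating a YES resolution.
    """
    # Assumption: all consulted AIs must agree, and both corroborating
    # checks must pass, for the market to resolve YES.
    if all(ai_verdicts) and experts_confirm and p_doom_declining:
        return "YES"
    return "NO"
```

For example, `resolve([True, True, True], True, True)` returns `"YES"`, while any dissenting AI verdict or failed corroboration yields `"NO"` under this (assumed) rule.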
