If humanity avoids AI Doom until 2060, what will have contributed to this? [Resolves N/A, then re-resolves in 2060]
Resolved N/A on Jan 28.

Answers (each resolved N/A):

- Alignment is never "solved", but is mitigated well enough to avoid existential risk.
- Superintelligent AI is not developed by 2060
- Humanity first invents weaker AI, and through hands-on experience with them, learns the methods and develops the tools to align a much stronger AI.
- Superintelligent AI will be much less motivated than humans, and in any case will automatically have humanlike goals due to training on human data.
- AIs that are better than humans at most cognitive tasks are developed and become widespread, and the world still appears vulnerable in 2060, but humans are still alive at that time.
- Alignment ends up not a concern, AI turns out to naturally optimize for benign ends via benign means
- One or more smaller-scale disasters turn governments and public opinion against AI such that superintelligence is delayed enough for us to solve the alignment problem
- Multiple "unaligned" superintelligences are created, and while some of them want to cause AI Doom (directly or indirectly), the ensuing handshake-hypowar results in a mostly-aligned Singleton.
- It turns out that superweapons are very hard to create and so no superintelligence is able to pose a global threat through nanobots etc
- OpenAI ceases to exist before AGI is made
- Humanity naturally takes so long to create a superintelligence that other advancements happen first, which prevent AI Doom when a superintelligence is created
- By means other than AI (Engineered pandemic? Nukes? Automated warfare? Etc.) we kill enough people to set humanity's tech level back a bit/a lot.
- Multiple "unaligned" superintelligences are created, but none of them want to cause AI Doom.
- Humanity coordinates worldwide to significantly slow down the creation of superintelligence, buying enough time for other advancements that prevent AI Doom
- The core problems of alignment are solved by a company's efforts, like OpenAI's Superalignment
- Human intelligence augmentation is developed, which makes everything else easier
- Humanity is unable to create a superintelligence before 2060, despite generally trying to make smarter and smarter AI
- Restricting access to the weights (or equivalent) of the most powerful AI models
- Superintelligent AI is never empowered enough to become a serious risk (ie it's just used for specific tasks and not given enough agency to make it risky)
- Eliezer Yudkowsky

Many people are worried that in the next few decades humanity will create a superintelligence that brings extinction, enslavement, or some other terrible fate upon us: "AI Doom". This question asks how we avoided this by 2060, in the worlds where we did.

Please try to keep answers short and specific. Describe one thing, and describe it in a way that the average Manifold user can understand. Don't assume your audience knows a lot of very technical terms. Try not to present answers in a biased way, such as saying "We are lucky that [thing you think is unlikely] happens." Just say [thing happens].

If you have multiple things to say, say them in multiple submissions. Make as few assumptions as possible in each individual submission. You can elaborate in detail in the comments. It's better to submit something that doesn't overlap with existing answers too much, but submitting a much better version of an existing submission is also okay.


This question is one in a long line of similar questions in various formats. I think we mostly expect that humanity will not survive to resolve these questions, so they mainly represent the opinions of people willing to lock their mana up indefinitely. They also represent the opinions of people trading on other people's opinions in the short term.

This question tries a new way to incentivize long-term accuracy. In about a week (maybe a bit sooner if this question doesn't get much interest, or a bit later if it gets more), this question will close. Then all answers will resolve N/A. All trades, profits, and losses will be reversed, and all mana returned.


If we all die and/or Manifold ceases to exist, your reward will be the trader bonuses you got for submitting interesting answers. You'll also have bragging rights if your answer was voted high up before market close. I may also award bounties to very insightful answers.

If we survive until 2060, then I or another moderator will use the "unresolve" feature to undo the N/A resolution and put everyone's mana back into the market as it was at market close. All answers will then be graded for their general quality by the best experts I can find, and will resolve to a percentage from 0 to 100. There will be a grading curve, so answers can be expected to be graded relative to each other rather than compared to a hypothetically perfect answer that was not submitted.

Hopefully, this format will function like a poll that is weighted by the amount of mana that people have and are willing to spend on it, and will produce more accurate results than an unweighted poll.

Please do not submit answers that are too similar to existing answers, or which are just bad jokes. I will N/A answers that I think are not worth the space they take up in the market. Good jokes may be allowed to stay up longer than bad jokes. This market will be unranked, so as not to disrupt Manifold Leagues.

I am open to suggestions for improving this format, and may update these rules within the spirit of the question.
