If humanity avoids AI Doom until 2060, what will have contributed to this? [Resolves N/A, then re-resolves in 2060]
resolved Jan 28

All answers resolved N/A:

- Alignment is never "solved", but is mitigated well enough to avoid existential risk.
- Superintelligent AI is not developed by 2060
- Humanity first invents weaker AI, and through hands-on experience with them, learns the methods and develops the tools to align a much stronger AI.
- Superintelligent AI will be much less motivated than humans, and in any case will automatically have humanlike goals due to training on human data.
- AIs that are better than humans at most cognitive tasks are developed and become widespread, and the world still appears vulnerable in 2060, but humans are still alive at that time.
- Alignment ends up not a concern, AI turns out to naturally optimize for benign ends via benign means
- One or more smaller-scale disasters turn governments and public opinion against AI such that superintelligence is delayed enough for us to solve the alignment problem
- Multiple "unaligned" superintelligences are created, and while some of them want to cause AI Doom (directly or indirectly), the ensuing handshake-hypowar results in a mostly-aligned Singleton.
- It turns out that superweapons are very hard to create and so no superintelligence is able to pose a global threat through nanobots etc
- OpenAI ceases to exist before AGI is made
- Humanity naturally takes so long to create a superintelligence that other advancements happen first, which prevent AI Doom when a superintelligence is created
- By means other than AI (Engineered pandemic? Nukes? Automated warfare? Etc.) we kill enough people to set humanity's tech level back a bit/a lot.
- Multiple "unaligned" superintelligences are created, but none of them want to cause AI Doom.
- Humanity coordinates worldwide to significantly slow down the creation of superintelligence, buying enough time for other advancements that prevent AI Doom
- The core problems of alignment are solved by a company's efforts, like OpenAI's Superalignment
- Human intelligence augmentation is developed, which makes everything else easier
- Humanity is unable to create a superintelligence before 2060, despite generally trying to make smarter and smarter AI
- Restricting access to the weights (or equivalent) of the most powerful AI models
- Superintelligent AI is never empowered enough to become a serious risk (ie it's just used for specific tasks and not given enough agency to make it risky)
- Eliezer Yudkowsky

Many people are worried that in the next few decades humanity will create a superintelligence which brings extinction, enslavement, or some other terrible fate upon us in "AI Doom". This question asks how we avoided this by 2060, in the worlds that we did.

Please try to keep answers short and specific. Describe one thing, and describe it in a way that the average manifold user can understand. Don't assume your audience knows a bunch of very technical terms. Try not to present answers in a biased way like saying "We are lucky that [thing you think is unlikely] happens." Just say [thing happens].

If you have multiple things to say, say them in multiple submissions. Make as few assumptions as possible in each individual submission. You can elaborate in detail in the comments. It's better to submit something that doesn't overlap with existing answers too much, but submitting a much better version of an existing submission is also okay.


This question is one in a long line of similar questions in various formats. I think we mostly expect that humanity will not survive to resolve these questions, so they mainly represent the opinion of people willing to lock their mana up indefinitely. They also represent the opinion of people trading on other people's opinions in the short-term.

This question tries a new way to incentivize long-term accuracy. In about a week, maybe a bit sooner if this question doesn't get much interest or a bit later if it gets more interest, this question will close. Then all answers will resolve N/A. All trades, profits, and losses will be reversed, and all mana returned.


If we all die and/or manifold ceases to exist, your reward will be the trader bonuses you got for submitting interesting answers. You'll also have bragging rights if your answer was voted high up before market close. I may also award bounties to very insightful answers.

If we survive until 2060, then I or another moderator will use the "unresolve" feature to undo the N/A resolution and put everyone's mana back into the market as it was at market close. All answers will then be graded for their general quality by the best experts I can find, and will resolve to % from 0 to 100. There will be a grading curve, so one can expect the answers to be graded relative to each other instead of being compared to a hypothetically perfect answer which was not submitted.
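
For concreteness, here is a minimal sketch of one way such a relative grading curve could work. This is purely illustrative; the actual grading method is left to the future resolvers, and the function and scores below are made up.

```python
# A hypothetical rank-based grading curve: answers are ordered by raw expert
# score and spread evenly from 0% (worst) to 100% (best).
def curve_grades(scores):
    ranked = sorted(range(len(scores)), key=lambda i: scores[i])
    n = len(scores)
    percents = [0.0] * n
    for rank, i in enumerate(ranked):
        percents[i] = 100.0 * rank / (n - 1) if n > 1 else 100.0
    return percents

# Four made-up expert scores -> roughly [66.7, 0.0, 100.0, 33.3]
print(curve_grades([7.2, 3.5, 9.1, 5.0]))
```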

Hopefully, this format will function like a poll that is weighted by the amount of mana that people have and are willing to spend on it, and will produce more accurate results than an unweighted poll.

Please do not submit answers that are too similar to existing answers, or which are just bad jokes. I will N/A answers that I think are not worth the space they take up in the market. Good jokes may be allowed to stay up longer than bad jokes. This market will be unranked, so as not to disrupt Manifold Leagues.

I am open to suggestions for improving this format, and may update these rules within the spirit of the question.


Thanks for participating everyone! And sorry about the emails, wish those were grouped all together for unlinked multichoice markets.

This was a weird experiment; I'm not sure it generated much insight, but hopefully some value was gained nonetheless. Fingers crossed, I'll see you all in 2060 for the proper re-resolution! In the meantime, my new weird experiment of the week can be found here:

This market structure basically gives no consequences for shit takes, especially beliefs that people can profit off of IRL, like LeCun's "AI is a tool and always will be", or whatever shameless garbage the eleuther people are peddling this time around.

Yud's original market won't resolve but will still move, rewarding and punishing people as reality swings while the AI situation unfolds, as more people join Manifold and develop quantified track records on specific topics, and as predictive bots move markets using analytics built on those track records.

It's a fantastic idea but needs iteration; if there is a viable final form, this is not it. Looking forward to participating in the next attempt too.

Unless humanity NARROWLY avoids it, it will be hard to tell what the major factor was.

Alignment ends up not a concern, AI turns out to naturally optimize for benign ends via benign means

Edited slightly per the submission guidelines to be a bit more neutrally phrased


Don't loans mostly solve the problem of long-term trades tying up funds?

If I'm doing my calculations correctly, then even if you simply left this open until 2060, after (for instance) two months of everyone taking their daily loans, this market would only hold ~9% of our original investment, with the rest loaned back to us (and more returning over time).

Granted, you've designed this to return our mana to us faster than that, but I'm not entirely convinced that was necessary.
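
For what it's worth, here is a rough sketch of the arithmetic behind that ~9% figure. It assumes daily loans return about 4% of the remaining invested balance each day; that rate is an assumption chosen to match the figure above, not an official Manifold number.

```python
# Fraction of the original investment still locked in the market after ~2 months,
# assuming each day a fixed fraction of the remaining balance is loaned back.
balance = 1.0           # fraction of the original investment still tied up
daily_loan_rate = 0.04  # assumed loan rate per day (illustrative)

for day in range(60):   # roughly two months
    balance *= (1 - daily_loan_rate)

print(f"Fraction still tied up after 60 days: {balance:.1%}")  # ~8.6%
```

With a 2%-per-day rate the same calculation leaves roughly 30% tied up, so the conclusion is fairly sensitive to the assumed loan rate.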

@MatthewLeong no, because there's now a max leverage cap.

@MartinRandall would you want to ELI5?

Superintelligent AI is not developed by 2060

I think this is insanely underrated as a conjunction with "Humanity avoids AI doom until 2060"

30% chance ASI in <40 years AND alignment is solved? No way.

Humanity first invents weaker AI that can solve the core problems of alignment, and then creates an aligned superintelligence

I don't get this option or why it's so high. Why would an unaligned AI be able to solve alignment if it doesn't actually understand what humans want? And if it somehow could, why couldn't you give this exact same task to a superintelligence and have it align itself?

@Shump An unaligned AI may be able to understand what humans want "well enough". When we say alignment, we generally do not mean perfect accordance with human values. After all, humans do not know what we want and often disagree with each other. We mean something more like "don't kill everyone", and the issue is not so much understanding what that means as it is having strong assurances that the AI will act in accordance with its understanding of that in all possible scenarios. So the unaligned AI may get the concept of "don't kill everyone" and come up with clever ways to architect AI solutions that respect it, along with convincing ways to prove it.

You could give the task to a superintelligence, but in that case you already took a big risk and the superintelligence might have killed you.


@aashiq I'm sorry but I remain convinced that AI risk people do not understand how Machine Learning works. That's not how any of that works... But anyways, that's a long discussion and this is not the place for it.

@Shump FWIW, I agree with you in general disdain for them. Please consider the above to be my attempt to model their fears, rather than my own beliefs

@Shump There's no reason an AI needs to be aligned to human values to solve arbitrary problems which includes aligning an AI to any set of values.


@Shump - I don't think AI risk people are necessarily all relying on current Machine Learning techniques being what gives rise to AGI.

@MatthewLeong Valid, but that means that AGI would require a paradigm shift in AI. Current Machine Learning makes machines that are trained to solve specific tasks in a way that probably won't fit whatever AGI means.

This option is high because it’s literally OpenAI’s plan for how to solve the alignment problem.

@Shump I don't think current text models are trained to solve specific tasks. Communicating by text is a very general task.

@MartinRandall They are specifically trained to generate the next word in a way that fits human-written language (pretraining) or whatever humans like to see (RLHF). That's all they do.

@Shump

They are specifically trained to generate the next word

The next token, if we're being specific. A token can be all sorts of things, which is why LLMs can do things like write programs, and play chess.
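
As a small illustration of that point, OpenAI's tiktoken library can show that prose, code, and chess notation all reduce to the same kind of integer token sequences; the "cl100k_base" encoding is just one example.

```python
# Requires the tiktoken package (pip install tiktoken).
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

samples = [
    "The cat sat on the mat.",      # ordinary prose
    "def add(a, b): return a + b",  # Python code
    "1. e4 e5 2. Nf3 Nc6",          # chess moves in algebraic notation
]

for text in samples:
    tokens = enc.encode(text)
    # Every string becomes a sequence of integer token ids; the model is only
    # ever trained to predict the next id in such a sequence.
    print(f"{text!r} -> {len(tokens)} tokens, e.g. {tokens[:6]}")
```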


@aashiq
> After all, humans do not know what we want and often disagree with each other.

This just means humans are unaligned, not that the problem is easier because it is easier/simpler for you to imagine it that way...

Eliezer Yudkowsky

I'm not sure how the future historians will grade this but I think it's a clever answer


@Joshua https://manifold.markets/Alicorn/eliezer-yudkowsky-stock-permanent
The Yud permanent stock should probably have some relationship with this choice, and have more traders in general


I worry that “superintelligence” is a bit of a nebulous benchmark


@DavidJohnston A good point! I feel like the term is used enough that most people on manifold should be thinking of the same order of magnitude of intelligence, but I'm sure there are still many disagreements.

I'm open to putting an exact definition in the description if people have suggestions.

@Joshua The kind of case I worry about (resolution wise) looks like sustained widespread use of automated science and technology, but it’s basically recognisable as an extremely competent version of what people do today


@DavidJohnston I wouldn't call that superintelligence. And even if OpenAI comes out with GPT 6 and calls it superintelligence as a marketing gimmick and the term gets diluted, this market will still resolve according to the idea of superintelligence that most people use when talking about AI Risk.

If it's not intelligent enough that we have reason to worry that it could kill us all, I wouldn't call it a superintelligence. Still open to a more formal definition though.

@Joshua Let's say GPT-6 comes out, and it's a multimodal LLM that's better than humans at every single task within its text, audio, and visual capabilities. Is that superintelligence? This hypothetical GPT-6 still won't be able to do things like "resist shutdown", "feel love", or "make friends", because these are things that LLMs are simply not designed to do.

I guess by your definition that's not superintelligence because it's not capable of killing us all. But that's kind of a weird definition. An augmented human might be superintelligent, but not capable of killing everyone.


@Shump I'm not saying I'm defining superintelligence as something that actually is capable of killing us all, just that it's smart enough that we have reason to worry.

Wikipedia just defines it as "a hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds", which I think is enough of a definition for the market?

We can leave it to the far future resolvers to fairly decide what counts as "far surpassing", I think.

@Joshua - "If it's not intelligent enough that we have reason to worry that it could kill us all, I wouldn't call it a superintelligence."

I do like that working definition, at least for this question.

Like, avoiding AI doom because the candidate superintelligences aren't superintelligent enough to cause doom just sounds like not having made a superintelligence yet, and not even facing the risk of AI doom.
