Will artificial superintelligence exist by 2030? [resolves N/A in 2027]

Will continuing progress in AI capabilities result in the existence of unambiguous artificial superintelligence, clearly superior to humans and humanity at virtually every cognitive problem, by Jan 1st 2030? (Every problem on which it's possible to do better than humanity at all; e.g., not Tic-Tac-Toe or inverting hashes.)

An artificial superintelligence like this would, according to some market participants' beliefs, probably kill everyone. This creates a market distortion because M$ is not worth anything if the world ends. Pleading with people to just bet their beliefs on this important question doesn't seem like the best possible solution.

This market resolves N/A on Jan 1st, 2027, and all trades on it will be rolled back at that time. Until then, however, any profit or loss you make on this market will be reflected in your current wealth. This means purely profit-interested traders can make temporary profits here and use them to fund other, permanent bets that may be profitable: by correctly anticipating future shifts in prices among people who do bet their beliefs on this important question, buying low from them and selling high to them.
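The temporary-profit mechanic above can be illustrated with a toy calculation (a sketch with invented numbers; it ignores Manifold's actual AMM slippage and fees, and the function names are mine):

```python
def shares_bought(mana_spent, price):
    """Shares a YES buy yields at a fixed price (simplified: no slippage)."""
    return mana_spent / price

def trade_profit(mana_spent, buy_price, sell_price):
    """Mana profit from buying YES low and selling the same shares high."""
    shares = shares_bought(mana_spent, buy_price)
    return shares * sell_price - mana_spent

# Buy M$100 of YES at 30%, then sell after the market moves to 40%:
profit = trade_profit(100, 0.30, 0.40)  # about M$33.33, usable elsewhere
```

The point is that this profit is spendable on other, permanent markets before Jan 1st, 2027, even though the trade itself will eventually be rolled back.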

For more on this see the paired question, https://manifold.markets/EliezerYudkowsky/will-ai-wipe-out-humanity-by-2030-r

OdinYoda bought Ṁ10 of NO

How can we discuss superintelligence when we still don't have a full grasp of human intelligence? It also seems like there will be lots of hype like this but no stable (falsifiable) definitions of "superintelligence."

Martin Randall

@BenjaminJensen it's in the question description: "clearly superior to humans and humanity at virtually every cognitive problem". That's plenty well defined to bet on.

E F bought Ṁ400 of YES

JD bought Ṁ267 YES from 33% to 35%
Sophus Corry predicts NO

I love that manifold is so biased towards AI progress being insanely fast from here because there are a lot of free mana markets (ASI within 7 years, realistic video from prompt in 2023) where even in the unlikely scenario that I'm wrong, the world will change so drastically that I won't care about my mana much anymore.

It's not even that I think ASI is impossible; 2030 just seems way too soon. It's about 6 years away, and it's already been 3 years since GPT-3.

David Bolin predicts NO

@SophusCorry People are biased because they were surprised when ChatGPT came out (it was not GPT-2 or the initial version of GPT-3 that surprised them, nor did the surprise come with GPT-4).

So they expect to continue to be surprised in similar ways, and they cannot imagine such a sequence of surprises that would not lead to ASI in a short period.

(1) There are many such sequences that would not lead to ASI in a short period.
(2) In reality we should expect reversion to the mean, not such a sequence in the first place.

Gerald Monroe

@DavidBolin we also may slam into temporary logistical limits. The free market right now wants to build a lot more GPUs for AI accelerators but it will take months to build more using existing capacity and years to expand capacity for the types of memory and process nodes accelerators need.

Similar limits apply to robots. This can add years, even decades, to timelines of exponential growth while capacity is added before true ASI is possible.

The King

@EliezerYudkowsky FYI: I think if you're trying to extract info about whether AI will kill us from prediction markets, a useful tactic would be to ask "When asked at the end of 2023, what credence will Yudkowsky put on AI doom by 2030?". If it's higher than your current value, the market is predicting bad signs will emerge in the future. If it's lower, the market is predicting good signs.

(I guess for a simple binary market you can split payoffs between options?)
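The inference King describes amounts to a sign comparison (a sketch; the function name and labels are mine, not part of any market):

```python
def implied_signal(market_forecast, current_credence):
    """Compare a market's forecast of a future credence to today's value.

    market_forecast: the market price on "what credence will X report later?"
    current_credence: X's credence right now.
    """
    if market_forecast > current_credence:
        return "bad signs expected"   # market expects the credence to rise
    if market_forecast < current_credence:
        return "good signs expected"  # market expects the credence to fall
    return "no shift expected"
```

For example, a 35% market forecast against a 30% current credence would read as the market expecting bad news to arrive.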

Pat Scott🩴

A note on:

WRONG: Purely profit-interested traders can still make money on trading this market, by correctly anticipating the shift in trades among people who do bet their beliefs, so that you can buy low from them and sell high to them.

CORRECT: All trades on the market will be rolled back on Jan 1st, 2027, apparently! Unless Manifold develops some other way to treat N/A markets before then. Sorry about that, I had not thought that was how it worked!

Trades, profits, and losses here will be rolled back, but their effects on traders' portfolios will persist until the market N/As, so profits (and loans) can be cashed out and leveraged elsewhere in the interim.

whalelang

A superintelligence by this definition would be better than humans at coming up with cognitive challenges, including challenges that it knows humans would beat it at, which leads to a contradiction if the set of cognitive challenges is infinite.

Maybe a narrower definition, like "achieves a perfect score on any novel, solvable educational test (AP exams, ACT/SAT, PhD qualification exams)," would encompass enough human cognitive parity that anyone would call it superintelligence.

Levi Finkelstein

@whalelang It says "virtually every cognitive problem"; given the "virtually" and a charitable interpretation, I don't think it will cause any controversy as currently defined.

VitorBosshard

Couldn't you just resolve the market to whatever probability it has when it ends? Pair this with a randomized end time to avoid last-minute sniping.

As is, the N/A resolution gives me zero incentive to bet either way, regardless of what I believe. A market that resolves to its ending probability at least creates a self-fulfilling collective delusion.

Eliezer Yudkowsky predicts NO

@VitorBosshard How have PROB markets previously played out in practice, does anyone know? Did they stay semantically anchored?

VitorBosshard

@EliezerYudkowsky I don't know; it's pure theorycrafting on my side.

A

@EliezerYudkowsky In the past they have definitely not stayed semantically anchored, regardless of whether they resolve to PROB, N/A, or some more complicated self-resolving mechanism. See for example the series of self-resolving "Is Joe Biden president?" questions: https://manifold.markets/MichaelWheatley/weighted-poll-will-biden-be-preside?r=QQ

Levi Finkelstein

@A That market is so small, though; there are like 4 people deciding most of the probability.

Levi Finkelstein

Is 40% your actual probability, or is it "market dynamics"?

Eliezer Yudkowsky bought Ṁ14 of NO

@levifinkelstein It's over 39% and under 79% per my limit order.

Levi Finkelstein

@EliezerYudkowsky So you think there's a less than 1% chance we solve alignment by then? Given that you have the same limit orders in both markets...

Eliezer Yudkowsky predicts NO

@levifinkelstein No, my limit order in the other market is 40-80 rather than 39-79.

Primer bought Ṁ30 of NO

@levifinkelstein Chance of solving alignment, plus time-for-takeoff considerations, will round up to 1%.

Levi Finkelstein

@EliezerYudkowsky ah, whoops, I think I had the same market up in 2 tabs.

@Primer 🙏