Will AI wipe out humanity before the year 2030?
216 · closes 2030 · 26% chance

If humanity is still around on Jan 1, 2030, then this market resolves to NO.

If humanity is not around but AI(s) are, then this market resolves to YES.

If neither humanity nor AIs are around then this market does not resolve.

See also:

/Tripping/will-ai-wipe-out-humanity-before-th-6d1e8126d974

/JamesBills/will-ai-wipe-out-humanity-before-th-6e8e547afa66

/JamesDillard/will-ai-wipe-out-humanity-before-th

Mira is predicting NO at 26%

I made an "AI Doom" group (AI Doom | Manifold Markets) to track all these markets. Sort by "Closing soon" and you'll get them all in order.

ShadowyZephyr

@Mira How do I create a group and add markets to it?

Daniel Tello is predicting NO at 28%

Dylan Slagh is predicting NO at 28%

Will the “Will AI wipe out humanity before the year 2030?” market be above 5% on January 1, 2029 12:00AM EST?

50% chance. (https://manifold.markets/embed/MartinRandall/will-ai-wipe-out-humanity-before-th-d8733b2114a8) If the above market is at or above 5% on January 1, 2029 12:00AM EST, then this market resolves YES. If it is below (using the displayed value; 4% or below), then this market resolves NO.
Simon Grayson is predicting NO at 26%

In case you'd like to make your predictions more specific or use these markets to arbitrage off each other, here's a market on 2033:

More importantly, if anyone would like to explain what the hell is going on with the dynamics of these markets in the comments over there, that would also be great!

MP

I am always puzzled by the markets that have significant volume and by the probabilities lol

Martin Randall is predicting NO at 23%

@MP 50/50 on whether we have AIs and 50/50 on whether they'll wipe us out. Simples.

eclair4151

In case anyone wants to make some money pushing this market to 20% 😁.

Trong

@eclair4151 that's losing 70k, just if anyone's curious

eclair4151 bought Ṁ3 of NO (edited)

@8 If AI kills us all, the mana is worthless anyway, so really you can only win

Daniel Tello bought Ṁ77 of NO (edited)

just gonna leave this here for future reference

/firstuserhere/will-ai-wipe-out-humanity-before-th-8a4174c96fff

/firstuserhere/will-ai-wipe-out-humanity-before-th-b380fd3fc016

/TeddyWeverka/will-ai-wipe-out-humanity-before-th-8d4b1732d830

/firstuserhere/will-ai-wipe-out-humanity-before-th-10878be2812a

/TeddyWeverka/will-ai-wipe-out-humanity-before-th-7e0a99eb4e97

/TeddyWeverka/will-ai-wipe-out-humanity-before-th-8f67976258d3

/TeddyWeverka/will-ai-wipe-out-humanity-before-th-f80d307a152f

/TeddyWeverka/will-ai-wipe-out-humanity-before-th-689ba58152d5

/TeddyWeverka/will-ai-wipe-out-humanity-before-th-c4dd0657d4fa

/MartinRandall/will-ai-wipe-out-humanity-before-th-d8733b2114a8 <— you are here

/TeddyWeverka/will-ai-wipe-out-humanity-before-th-b4153e3cfd6b

/TeddyWeverka/will-ai-wipe-out-humanity-before-th-c7e255f4843c

/SimonGrayson/will-ai-wipe-out-humanity-before-th-f68233d0c95f

/TeddyWeverka/will-ai-wipe-out-humanity-before-th-fb2463776b50

/Tripping/will-ai-wipe-out-humanity-before-th-6d1e8126d974

/PC/will-ai-wipe-out-humanity-before-th-c1d8754a17c8

/JamesBills/will-ai-wipe-out-humanity-before-th-6e8e547afa66

Will AI wipe out humanity before the year 2069? 420%

/PC/will-ai-wipe-out-humanity-before-th-cc08630b0f50

/JamesDillard/will-ai-wipe-out-humanity-before-th

/Timothy/will-ai-wipe-out-humanity-before-th-b1c8bfb80ec0

/Timothy/will-ai-wipe-out-humanity-before-th-fa8033d70b45

/Writer/if-ai-wipes-out-humanity-will-every

/PC/if-ai-wipes-out-humanity-will-every-1feb034eb32a

/TANSTAAFL/will-humanity-wipe-out-ai

/Stralor/ai-will-kill-us-all-stock

/Stralor/ai-will-free-us-all-stock

Jelle bought Ṁ50 of NO (edited)

@deagol nice.

Ramble is predicting NO at 25%

@deagol good to see markets are behaving rationally and metamarkets don't massively distort incentives...

firstuserhere (edited)

how short do the short timelines go? - added Ṁ1,000 subsidy

firstuserhere

@firstuserhere But who cares for long term markets? Here's one for the next 6 months:

Martin Randall is predicting NO at 23%

@firstuserhere also a duplicate

firstuserhere

@MartinRandall yeah, realized later, created the market for this month if you're interested in that

Dylan Slagh is predicting NO at 24%

Reynolds

Does survival as an em count? Or as part of some superorganism hive mind?

Martin Randall is predicting NO at 27%

@StrayClimb Uploaded humans count; this is humanity, not biological humanity.

Daniel Tello bought Ṁ72 of NO (edited)

@MartinRandall what if all surviving humans are preserved for bio research? or as tech slaves, say, doing backups and training the AI through captchas nonstop 10h/day?

Martin Randall is predicting NO at 27% (edited)

@deagol Then we would not be wiped out, and I will be sure to be a good little slave and always rotate the backup disks on the right schedule. Hopefully I will have some spare time to resolve this market but if not I'm sure the benevolent AI masters will take care of it.

Martin Randall is predicting NO at 27%

Daniel Tello is predicting NO at 27%

@MartinRandall not much humanity left in that, but ok gotcha

Martin Randall is predicting NO at 25%

@deagol Some of our ancestors had pretty miserable lives, but it would be a huge downgrade.

Daniel Tello is predicting NO at 25% (edited)

@MartinRandall right. was just confirming whether your word choice of “humanity” just meant humans alive (or uploaded as sentient minds), or perhaps it was something more subjective.

Dylan Slagh is predicting NO at 29%

Trong sold Ṁ3,856 of NO

duck_master (edited)

@8 what the hell, why did you singlehandedly push the market probability from 20% to 81%

Daniel Tello bought Ṁ164 of NO
Trong

@deagol that −3k profit came from the AI letter market

Daniel Tello sold Ṁ197 of NO

@8 a bit more?

Trong

@deagol haha yeah, but I was at 19k profit before

PseudonymousAlt

@8 and a billion mana before that in cash

Dylan Slagh is predicting NO at 21%

Catnee’s balance has dropped from 800,000 to 200,000 in the last month, mostly due to the 2100 market. I wonder how much longer he can maintain this price.

Irigi bought Ṁ184 of NO

Looking at the timeline, it seems there is a lot of probability packed between the years 2028-2030. That probably hints at some belief inconsistency? (Unless someone has a very specific model for the AGI rise.)

2024 2%
2025 6%
2026 10%
2027 12%
2028 12%
2030 21%
2032 21%
2035 23%
2040 26%
2060 36%
2100 35%
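
To see where that mass sits, the quotes can be converted into implied per-interval hazards. A minimal Python sketch, assuming each figure is a cumulative P(doom by January 1 of that year) and ignoring liquidity, discount rates, and mana-value effects:

```python
# Implied risk in each interval between quoted years, treating every quote
# as cumulative P(doom by Jan 1 of that year). Note the 2060 -> 2100 step
# comes out negative, one of the inconsistencies under discussion.
quotes = {2024: 0.02, 2025: 0.06, 2026: 0.10, 2027: 0.12, 2028: 0.12,
          2030: 0.21, 2032: 0.21, 2035: 0.23, 2040: 0.26, 2060: 0.36,
          2100: 0.35}

years = sorted(quotes)
for a, b in zip(years, years[1:]):
    # P(doom in (a, b] | survival to a) = (P_b - P_a) / (1 - P_a)
    hazard = (quotes[b] - quotes[a]) / (1 - quotes[a])
    annual = 1 - (1 - hazard) ** (1 / (b - a))  # constant-rate equivalent
    print(f"{a}->{b}: {hazard:+.1%} total, ~{annual:+.2%}/year")
```

On these numbers the implied annual hazard peaks around 5%/year over 2028-2030 and falls below 1%/year from 2032 onward.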

Teddy Weverka is predicting NO at 22%

@Irigi Also, the risk per year gets very small after 2030.

Martin Randall is predicting NO at 21%

@Irigi you might want to graph it and use liquidity to infer error bars. Some markets have less liquidity and thus larger error bars.

Teddy Weverka is predicting NO at 21%

@MartinRandall the 2030, 2040 and 2060 markets have a large number of investors, and small error bars. From these you can conclude that the risk per year gets very small after 2030.

Martin Randall is predicting NO at 21%

@TeddyWeverka agreed, I was not responding to the possible belief inconsistencies.

Can't quite take the numbers at face value due to discount rates and relative value of mana and such, but still very interesting.

Irigi is predicting NO at 21%

This seems like free money, as I will not care if the outcome is YES... on the other hand, if the threat is real, it would be very stupid to think like this.

I realize that unaligned AI is potentially dangerous, but this "fast timeline" seems implausible to me. Here is my reasoning:

p = p_s*p_f*p_1*p_t*p_a

p_s ... Probability that the idea that "we would be to superintelligence as animals / 3-year-olds are to us" is correct. The analogy I have heard from Yudkowsky is that if you think you have a better move than Stockfish in chess, you are simply wrong. There is nothing you can do to win, unless you use a different AI. I think this analogy is (probably) wrong. There are problems where the potential things you can do with more intelligence saturate at some level. In a 3x3 tic-tac-toe game, you will beat even the best engine if the position is won. You might argue that the world surely is more complex than chess, not less, but there are other reasons why the "things you can do with intelligence" saturate. For example, there are chaotic systems that maybe could be solved with effectively infinite compute, but are in practical terms unsolvable. (E.g. weather, but probably many others.) And the AI does not initially start as an equal opponent; all the might is on the side of humanity, except for the intelligence, so it plays with a strong handicap. My estimate of p_s is 20%.

p_f ... probability that the buildup of AI capabilities will be so fast that we will not be able to respond in time with a proper policy. Normally, I would put around 20% here, but this term is somewhat contained already in p_t on this timescale, so I will put 100% instead.

p_1 ... probability that the first attempt to get rid of humanity is successful. (I count attempts after which we would probably try to shut the AI down.) As I do not think the first superintelligences will be that much smarter than us, I put 20%.

p_t ... probability that superintelligence on the "wiping out humanity" level will arrive before 2030. Normally I would put < 0.5%; after having seen GPT-x technology, I put 5%. (Here I admit I might be "undershooting the exponential", but still.)

p_a ... probability that we will fail to discover AI alignment in time. Here I do not mean a full solution, just any measure that would help us recognize the threat and adopt proper policies before it is too late (including, e.g., having a similar AI that we only ask questions). I trust Yudkowsky and others that the problem is very hard, but I am a bit hopeful along the lines of recognizing the threats using comparable AIs; together I put 80%.

Altogether, I am well under 1%, even if I had 10% for p_t. I am well aware that different people will have different ideas about the probabilities, and about the equation itself. But I think the robust fact remains that multiple assumptions that are more than uncertain are at play, so the prior should not be that high.
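
Plugging in the stated point estimates (a straight multiplication of the numbers above, nothing more):

p = p_s · p_f · p_1 · p_t · p_a = 0.20 × 1.00 × 0.20 × 0.05 × 0.80 ≈ 0.0016, i.e. about 0.16%; with p_t = 10% instead, about 0.32%.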

Irigi is predicting NO at 21%

@Irigi Btw. I am more wary of scenarios like "we will willingly surround ourselves with intelligent machines until it is clear they have a lot of real power in hardware and numbers", particularly involving willful misuse, than of the "very quick doom by superintelligence magic" scenario I was assessing here.

Light is predicting NO at 21%

@Irigi Nice try, AGI. Using fancy probability estimations to persuade us to push the price down with NO bets and improve the odds, just so that you can bet YES before unleashing the nanobot swarm.

Martin Randall bought Ṁ100 of NO

@Irigi

p_s is 20%? I think this is above 99%. I frequently interact with people who are much smarter than me in their area of expertise. They just win. And this is purely within the human realm of intelligence, and purely a 1:1 competition.

Chaotic systems are harder to predict but conversely easier to control. Overall chaotic systems are an advantage to a force with more intelligence and lower initial resources.

Irigi is predicting NO at 21%

@MartinRandall Thank you, it is good to discuss the numbers! Are you assigning probability to "a machine can be smarter than a human" or to "a machine can be so much smarter that we cannot win (in surviving) no matter what we do"? On the first one I am with you: surely there are much smarter people who win already within the human intelligence span, in particular in fields that are very deep and unsaturated (e.g. many fields of science), and when the competition is fair. But for the second one, I am surprised that you put 99% on something we have never seen an instance of and only expect by extrapolation. (My 20% probably is too low, maybe it should be closer to 50%, with a quite wide distribution; in the end, these product equations are better treated as distributions rather than mean values, similar to arxiv.org/abs/1806.02404.)

> Chaotic systems are harder to predict but conversely easier to control.

Do you have some example / reference? My experience / assumption is that they are uncontrollable in practice and therefore impose a 'planning horizon'. By analogy, if in chess one piece were randomly shifted after each move, it would probably hinder planning, particularly for very strong players. You could still play the game, but while the rating ladder now spans something like 800-3500 and is probably close to saturation due to the draw-death of chess, the span of "chess with randomness" would be much narrower.
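
On the distributions point above, a minimal Monte Carlo sketch in Python; the log-uniform ranges below are illustrative assumptions spanning roughly an order of magnitude around the point estimates, not values anyone in this thread specified:

```python
# Treat each factor of Irigi's product as a distribution instead of a point
# estimate (the arxiv.org/abs/1806.02404 move). Ranges are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

def loguniform(lo, hi, size):
    return np.exp(rng.uniform(np.log(lo), np.log(hi), size))

p_s = loguniform(0.05, 0.80, n)  # "cannot win no matter what"
p_1 = loguniform(0.05, 0.80, n)  # first attempt succeeds
p_t = loguniform(0.01, 0.20, n)  # arrives before 2030
p_a = loguniform(0.50, 0.95, n)  # alignment not solved in time
p = p_s * 1.0 * p_1 * p_t * p_a  # p_f fixed at 1, as in the comment

print(f"mean {p.mean():.3%}, median {np.median(p):.3%}, "
      f"95th percentile {np.percentile(p, 95):.3%}")
```

Because a product of skewed factors is itself right-skewed, the mean lands well above the median, so a point-estimate product and a distribution over products answer different questions.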

Martin Randall is predicting NO at 21%

@Irigi For example, tides are very linear but very hard to change, whereas the weather is chaotic and easier to change. A double pendulum is very chaotic, but modern robots can balance one easily. Humans are too slow, of course.

A feature of chaotic systems is that very small changes can cause very large effects. This makes them easier to control than linear systems, where very large effects require very large changes.

Truly random systems are harder to predict and to control, but you were talking about chaotic systems, not random ones. We could discuss randomness separately but I doubt you think that true randomness is a factor in your probability.
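
The "small changes, large effects" half of this is easy to demonstrate. A minimal sketch using the logistic map as an illustrative stand-in for the pendulum (a sensitivity demo, not a control demo):

```python
# Two trajectories of the chaotic logistic map (r = 4) starting 1e-10 apart
# reach macroscopic separation within a few dozen steps.
x, y = 0.3, 0.3 + 1e-10
for step in range(1, 61):
    x, y = 4 * x * (1 - x), 4 * y * (1 - y)
    if abs(x - y) > 0.1:
        print(f"macroscopic separation after {step} steps")
        break
```

The flip side, which the pendulum-balancing robots exploit, is that the same sensitivity means a controller only needs tiny, well-timed inputs to steer the system.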

Martin Randall is predicting NO at 21%

@Irigi regarding "machine can be so much smarter we cannot win (in surviving) no matter what we do", if that is your definition of p_s, what are p_1 and p_f doing in your model?

My 99%+ is for "we would be to superintelligence as animals / 3 years olds are to us", which is your definition of p_s. Given sufficient advantages my 3yo can beat me at Candyland, yes. Not really relevant for predicting power dynamics.

Irigi is predicting NO at 21%

@MartinRandall The double pendulum is a nice example: if you know the dynamics, can observe the macroscopic degrees of freedom well, and can apply sufficient forcing, you can control the system, probably with much less forcing than a linear one.

Can you imagine the same for, e.g., weather? I would expect it is hard to apply sufficient forcing to the macroscopic degrees of freedom, as well as to observe them precisely. (In the end, the target system is probably the economy or society, but I do not have a good estimate of how chaotic these are.)

If the chaotic system cannot be controlled, you are left with the "prediction horizon", whether due to true randomness (random quantum effects amplified by the chaotic system?) or just the unknown state of the full system.

Irigi is predicting NO at 21%

@MartinRandall

>"machine can be so much smarter we cannot win (in surviving) no matter what we do", if that is your definition of p_s, what are p_1 and p_f doing in your model?

Ok, point taken. 99% still seems way too high, but p_s definitely should be much higher than 20%.

Martin Randall bought Ṁ1,000 of NO

@Irigi sure, so now we move on to p_1. This is supposed to be the probability that given that "we would be to superintelligence as animals / 3 years olds are to us", and also that "capabilities will be so fast that we will not be able to respond in time by a proper policy", given those things as already true, "the first attempt to get rid of humanity is successful".

Well, suppose that I am trying to take candy from a baby. And further suppose that the baby is not able to respond in time. Will my first attempt to take the candy be successful?

I don't think this is 99% only because there may be race dynamics with other AIs that are also trying to take over the world, in the scenario where lethal AGI is deployed in parallel.

> I count attempts after which we would probably try to shut the AI down.

Conditional on humanity deploying a lethal AI and miraculously surviving, the odds that we would be exactly that stupid again seem pretty high to me. Plus the odds of multiple AIs trying to take over the world at once. Plus the odds that the ensuing chaos causes a death spiral.

Generally, when modeling risk you should be adding up all the known and unknown pathways to risk, not estimating the odds of a single pathway. Your estimate will be too low otherwise. And I say this as a NO bettor who agrees that 21% by 2030 is too high.
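
The "add up the pathways" point as a worked equation: with independent routes to doom p_1, ..., p_n, the total is P = 1 − ∏(1 − p_i), not the probability of the single most salient route. For example, five independent 1% pathways give 1 − 0.99^5 ≈ 4.9%, nearly five times any single-pathway estimate.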

Irigi is predicting NO at 19%

@MartinRandall
> Well, suppose that I am trying to take candy from a baby. And further suppose that the baby is not able to respond in time. Will my first attempt to take the candy be successful?

That is the problem with analogies: they do not contain all the information. For example, this one mixes intelligence, power, and how trusting the entity is. Suppose you are a man with no special equipment and you would like to hunt a bear. You also know that it is deaf and will not be able to respond in time. Will you succeed? Perhaps you will make an improvised weapon and hurt it in the first attack, and then it will kill you, or run away. Or you conclude it is not a good idea in the first place. If civilization equipped you with a gun, you would probably win, but we would not be facing a civilization of AGIs, just the first instances.

The bear is probably not more intelligent than a 3-year-old, but it is more powerful and definitely not trusting. If it were a bear with the nature of a child, you could trick it in many ways and it would most likely not use its power.

I am not saying we are in the man-bear situation, but we are not in the man-child situation either. Moreover, some components of humanity are trusting, and some are paranoid and powerful.

> Conditional on humanity deploying a lethal AI and miraculously surviving, the odds that we would be exactly that stupid again seem pretty high to me.
Probably right, but I would offer nuclear weapons as a counterexample. (But then, nuclear weapons do not spit gold, to borrow that analogy.)

Dylan Slagh is predicting NO at 20%

To me the real question of this market is: “If AI wipes out humanity, what is the chance that someone or something I care about would benefit from my YES shares in this market?” And that seems like a very unlikely possibility to me. On the other hand, my NO shares will actually pay off and benefit me.
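
Dylan's point as a quick expected-value sketch: if a YES payout is worth u ≈ 0 to you in doom worlds, then for any true belief p, E[utility of YES] = p · u ≈ 0 while E[utility of NO] = (1 − p) · (a payout you can actually spend) > 0, so survival-conditioned traders will sell YES at any positive price. The quoted probability then need not track anyone's actual p, which is the discount-rate caveat Martin raises above.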

Martin Randall

@DylanSlagh What are your current odds of AI wiping out humanity by 2030, ignoring that factor?

Dylan Slagh bought Ṁ1,000 of NO

@MartinRandall Maybe something like 10%. Even though that seems like a big percentage, it feels very difficult to act on that belief. Most of the AI doom discourse I read is extremely low quality; I find almost everyone's tone to be insufferable, on both sides.

Martin Randall bought Ṁ100 of NO

@DylanSlagh It is hard to act differently, but broadly we can up-weight the importance of the next 6-7 years by 10%. The personal difference between AI doom and nuclear doom is small. Of course the civilizational difference is larger.

And of course we can bet NO in this market as long as it is at 20%.

Related markets

Will AI wipe out humanity before the year 2040? 46%
Will AI wipe out humanity before the year 2032? 26%
Will AI wipe out humanity before the year 2100? 50%
Will AI wipe out humanity before the year 2031? 25%
Will AI wipe out humanity before the year 2035? 27%
Will AI wipe out humanity before the year 2033? 26%
Will AI wipe out humanity before the year 2025? 7%
Will AI wipe out humanity before the year 2060? 43%
Will AI wipe out humanity before the year 2029? 18%
Will AI wipe out humanity before the year 2050? 40%
Will AI wipe out humanity before the year 2024? 1%
Will AI wipe out humanity before the year 2025? 4%
Will AI wipe out humanity before the year 2026? 6%
Will AI wipe out humanity before the year 2024? 1%
Will AI wipe out humanity before the year 2028? 12%
Will AI wipe out humanity before the year 2075? 42%
Will AI wipe out humanity before the year 2200? 50%
Will AI wipe out humanity before the year 2027? 9%
Will AI wipe out humanity before the year 2150? 43%
Will the "Will AI wipe out humanity before the year 2030?" market reach 20% in 2023? 65%