Will AI wipe out humanity before the year 2100
16% chance
This market will resolve to NO if AI has not wiped out humanity by the year 2100. If AI has wiped out humanity, the market's resolution is up to them. I made this bet with @tomchivers on the Browser Bets YouTube show. You can join in here: https://www.youtube.com/watch?v=Ak6HxFREPyc&t=1s

Simon predicts YES

Юрий Бурак

If it resolves YES, would anyone then pay out the mana that was bet?

Benjamin Ikuta bought Ṁ100 of NO
Benjamin Ikuta predicts NO

@BenjaminIkuta Dangit, Manifold doesn't play the GIF.

Asad Heera bought Ṁ10 of NO

Experts are actively debating whether AI could wipe out humanity by 2100, and the information above shows opposing views. In the Existential Risk Persuasion Tournament, AI experts put the probability that artificial intelligence exterminates humanity within the next hundred years at about 3%, while “super-forecasters” gave odds as low as 0.38%.

These are only estimates, however, based on existing knowledge and perceptions. How AI actually evolves, and what consequences it has, will depend on several factors: technological progress, ethics, legal systems, and how people adopt it.

Researchers, medical practitioners, and policymakers are working closely with their organizations to make sure that AI is developed responsibly and safely. The potential risks from powerful AI systems are real, but predicting their long-run impact on humanity remains impossible.

Jack predicts YES

Made a related question that is structured better to avoid some of the problems of this one:

Isaac King predicts NO

@jack I think the probability that AI kills all humans conditional on killing at least 10% of them is upwards of 90%, so my incentive on this market is still to bet it down pretty low even though I think the actual probability is much higher.

This is better for people who assign a lower conditional probability though.
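As a rough illustration of Isaac's point (all numbers here are hypothetical): suppose the chance that AI kills at least 10% of humans by 2100 is 30%, and the chance it then goes on to kill everyone is 90%. Then the chance that a market on the 10% threshold resolves YES with anyone left alive to collect is roughly 0.30 × (1 − 0.90) = 3%, so even a trader who genuinely believes the 30% figure is pushed to trade as if the probability were far lower.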

Max bought Ṁ50 of NO

Metaculus has human extinction from all sources at only 1.5%. https://www.metaculus.com/questions/578/human-extinction-by-2100/

Jack predicts YES

@MaxMorehead Metaculus predictions on this particular question, and on several other related questions in the topic, are terrible for a variety of reasons; it's been discussed before.

For a more reasonable forecast, check out https://samotsvety.org/blog/2022/09/09/samotsvety-s-ai-risk-forecasts/ which forecasts 25% chance of misaligned AI takeover by 2100.

Max predicts NO

@jack That aggregation forecasts "AI takeover". I'm not sure exactly what their definition of that term is (do I have to read WWOTF to get it?), but I assume it incorporates situations in which humans lose control of the future but are nonetheless not fully extinct.

Secondly, I included the Metaculus prediction not necessarily to argue that it's right, but to argue that the current odds in this market are ridiculously far off from "mainstream" opinions. I'd think that the average Metaculus user would be more concerned with existential risk than the average knowledgeable person, and Metaculus generally predicts fairly fast AGI timelines, and still the prediction on AI-caused extinction is not even close to 15%.

The problem with forecasts on AI-risk from within the EA AI safety community is that there is a strong selection bias. The only people who "have done the equivalent work of reading the Carlsmith report" are people who are already quite concerned about AI ex-risk. The strain of thought that says "AI is likely to kill us all" has not yet had a large enough effect on company and government policy to merit well-researched responses from people who disagree with it (though that's probably changing).

In my opinion, without getting into particular arguments, I'd expect the "mainstream" to vastly underestimate AI ex-risk, but I'd also expect people in EA-AI circles to systematically overestimate the risk. So, I think the fact that even the prediction you linked makes 15% look like a fair price (when incorporating the probability of non-extinction takeover, and pre-TAI catastrophe) is motivation for me to buy NO.
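As a rough illustration of the arithmetic behind this (the 50% split is an illustrative assumption, not a figure from the Samotsvety post): if the chance of misaligned AI takeover by 2100 is 25% and only half of takeover scenarios end in literal human extinction, the implied extinction probability is 0.25 × 0.5 = 12.5%, already below this market's roughly 15-16%, before any further discount for catastrophes that derail AI development entirely.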

Jack predicts YES

@MaxMorehead To set context, my goal here is to predict the questions we care about, it isn't to try to bet this particular market to the "correct price" (there are many reasons the price here is not a reflection of predicted probability, in any case). Also, if the probability is even 1%, we should be taking it massively more seriously than we are now.

I don't know the exact definition of AI takeover but yes, it certainly includes loss of control without extinction. I think that is the more relevant question, as most such scenarios are also very bad for humanity, I believe.

Yes, the predictions of the mainstream on this question are very low. Is it worth considering? Yes. Are they right? I think not. A lot of the quality of my predictions comes from deferring to other forecasts, and judging the correct amount to defer to them. I put very little stock in the "mainstream" take here. The typical mainstream portrayal of the issue is such a poor caricature of the real issues, imo.

The predictions from EAs who don't work on AI safety are also high; it's not just the AI safety folks.

Alena Shokholova bought Ṁ50 of NO

No, the idea that AI will drive humanity to extinction in such a short period of time is highly unlikely and largely a topic of science fiction.

Researchers, policy makers, and organizations are actively working on ethical AI development, safeguards, and regulations to ensure that AI technology is used responsibly for the benefit of humanity. While AI has its challenges, it also offers opportunities to solve complex problems and improve our quality of life if used wisely and with appropriate safeguards.

Alexandre K predicts NO

@26a0 I too would predict no, but none of those regulations and ethical guidelines mean anything when anyone can have access to an open-source model of just about any kind (which is increasingly becoming true).

Jörn predicts YES

@26a0 what is the most impressive (from a science or engineering standpoint) mental task that you expect an AI with or without ethical guidelines to be able to do in 2100? What's the least impressive mental task it won't be able to do on 1.1.2024?

chengran shi

@26a0 We agree with the analysis and would have made the same bet. I believe it is essential to approach this question with a nuanced perspective, as the comment above does.

According to an article on Freethink.com, “an Existential Risk Persuasion Tournament, held by the Forecasting Research Institute, collected predictions from technical experts and “super forecasters” (people with a track record of making accurate predictions) about various existential threats — ranging from AI to nuclear war to pandemics. And the threat of AI was the most divisive topic. AI experts said there is a 3% chance that AI will kill humanity by 2100, according to their median response. Superforecasters were far more optimistic, putting the odds of AI-caused extinction at just 0.38%. The groups also submitted predictions on the likelihood that AI will cause a major catastrophe, even if it doesn’t wipe us out entirely (defined as causing the death of at least 10% of humans over a 5-year period).”

As seen from the above article, there has still been no clear consensus among experts regarding the timeline and likelihood of AI causing harm to humanity. Some experts believe that AGI (Artificial General Intelligence) with the potential to pose existential risks may not be realized for many decades or even centuries. 

Additionally, AI has played an instrumental role in bringing about healthcare advancements. AI’s development is not solely focused on destructive capabilities. 

“In summary, predicting whether AI will pose an existential threat to humanity by 2100 is uncertain and highly debated. While there are valid concerns, there is also a strong emphasis on responsible AI development and ethics. The future of AI's impact on humanity will depend on the decisions and actions taken by researchers, developers, policymakers, and society as a whole.” (Forecasting Existential Threats, 2023) 

By Team 4: Andrés Tocaruncho, Andrea Montserrat Alvarez Rodriguez, Cesar Ruiz, Chengran Shi, Guilherme De Souza, Komal Ghazanfar

GG predicts YES

What did we learn in July?

Niplav Yushtun predicts YES

@GG Unresolvable markets tend to become Keynesian Beauty Contests towards the default option, I guess?

Jeremiah sold Ṁ37 of YES

holy I've lost so much mana on this market

Floris van Doorn predicts NO

@jeremiahsamroo You bet that there is a 42+% chance that humans get wiped out this century... Bad bets indeed cause you to lose mana.

Jeremiah predicts YES

@FlorisvanDoorn I was market making when the influx of LK-99 members caused this market to dip. I held onto my investment bc of the sunk cost fallacy. I am very anti AI doom.

Fedor predicts YES

@jeremiahsamroo 2100 is a long time for it to surge again, you can just keep HODLing :P

Martin Randall predicts YES

@jeremiahsamroo Most people are anti AI doom. It really needs a rebrand to a friendlier name.

Jeremiah predicts YES

@Fedor 🫡 that's the plan, hodl until some big new AI development causes people to panic buy yes

Martin Randall predicts YES

@IsaacKing says the probability on this market is "obviously wrong".

/IsaacKing/will-manifold-stop-having-obviously

Will this move the market?

ShadowyZephyr

@MartinRandall Don’t think it’s so wrong anymore. It used to be very wrong at 45%.

shgfshggsgshgf

@ShadowyZephyr Yeah, I agree. I'd put the probability at 10%, but predictions this far out have a huge margin of error and I don't think 17% is unreasonable. 45%, though...

Jack predicts NO

Even 45% doesn't seem "obviously wrong" imo. Many forecasters think that it is far, far easier to develop an unaligned superintelligence than an aligned one, and that most unaligned superintelligences would have catastrophic consequences for humanity, and that a superintelligence is likely to be developed by 2100.

I'm a bit more optimistic than 45%, but not that much more.

Isaac King predicts NO

I meant "obviously wrong" in the sense of economic incentives, not actual predictions. The correct probability for this question in a market of homo economicus is 0%, regardless of how likely AI actually is to wipe our humanity.

Brian T. Edwards predicts YES

@jack So is it fair to assume you think AI development should be paused indefinitely?

Jack predicts NO

@IsaacKing Sure, in a world of perfectly rational economic actors (plus some other assumptions about things like not caring about information signaling etc), this market wouldn't exist because nobody would bother to buy YES. But I don't think that's the point here, and that's different from saying that the economic incentives all point to NO - because the real world is not that world. It is true that one Nash equilibrium for this question is 0%, but it is not the only equilibrium, and it is not the one we should expect to see in the real world.

@BTE I think we should be investing massively more in AI safety. I think pausing AI development is impractical; I'm unsure whether or not it would be a good idea in theory but in practice it doesn't really matter.

Isaac King predicts NO

@jack If Manifold were to suddenly catch on among Wall Street traders, this question would rapidly converge to 0. The fact that it hasn't is an idiosyncrasy of Manifold's current user base, and won't reflect anything generally true about the world other than by coincidence.

Jack predicts NO

@IsaacKing It would likely do that but it's by no means certain - look at meme stocks as a counterexample. The "idiosyncrasy" isn't limited to Manifold, it's more general than that.

Martin Randall predicts YES

@IsaacKing We do have a few quants and other sophisticated traders and they are not buying this market down to 0%. Mira and others have explained some of the reasons why below.

Steven predicts YES

@IsaacKing In addition to what Jack said, there's some chance that AI wipes out humanity and there continues to be a chain of transactions between agents who care about Manifold resolutions, so I don't agree that the "correct" solution is quite 0% even given your particular choice about which perverse incentives to model. That said, I agree that the "correct" probability in your sense is probably a lot lower than the "true" probability of the event happening. I have other disagreements with your notion of the "correct" probability that I think are more major; I'm still thinking about how to present them in a way that has net good consequences.

shgfshggsgshgf

@MartinRandall While I agree there are various reasons this market won't hit 0 prior to resolving, I think liquidity probably has something to do with that as well. If I wanted to buy this market down to the percentage I thought was accurate - around 10% - I'd need to spend approximately one million mana, or $10,000. I would need to spend two-thirds of my yearly graduate TA stipend on that. I'd need to be a multimillionaire before I ever even thought about spending that much on Manifold. Betting this market to zero would take $29,000. Similarly, I think that the infamous AI-kills-everyone-by-2030 market should be at 1%, but I can't afford to spend hundreds or thousands of dollars to bet it that low.

Meanwhile, if I wanted to completely fuck up the market's probability and set it to 50%, it would "only" take 100,000 mana. It would be ten times cheaper for me to set this market to a percentage that I think is absurd than it would be for me to set it to a percentage that I think is accurate. I also wouldn't be making great returns on NO since the price is already so low.

tl;dr The structure of Manifold makes it such that small, highly energized groups (e.g. AI doomers) can stop the probability of markets about unlikely events from reaching the true probability, because buying YES at low percentages (or NO at high percentages) is cheap, and the payoff can be huge (though in this case, what payoff...?)
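For anyone who wants to sanity-check this kind of cost estimate, below is a minimal sketch (in Python) of the mana needed to move a toy constant-product binary market to a target probability. It is a deliberately simplified model with made-up pool sizes, and the cost_to_move helper is purely illustrative: Manifold's real Maniswap pools use a weighted invariant and the figures quoted above also absorb limit orders, so actual costs differ, but the sketch shows how the cost of any move scales with the size of the liquidity pool.

import math

def cost_to_move(yes_pool: float, no_pool: float, target_prob: float) -> float:
    """Mana needed to push a toy constant-product binary market
    (invariant yes_pool * no_pool = k, price = no_pool / (yes_pool + no_pool))
    to target_prob.

    A bet of m mana mints m YES and m NO tokens into the pool; the trader
    then withdraws shares on their own side until the invariant holds again.
    """
    k = yes_pool * no_pool
    price = no_pool / (yes_pool + no_pool)
    if target_prob > price:
        # Buying YES: the NO side ends at no_pool + m, so the new price is
        # (no_pool + m)**2 / (k + (no_pool + m)**2); solve that for m.
        return math.sqrt(k * target_prob / (1 - target_prob)) - no_pool
    # Buying NO is the mirror image, with the YES side ending at yes_pool + m.
    return math.sqrt(k * (1 - target_prob) / target_prob) - yes_pool

# Hypothetical pool giving a 16% market; doubling both pools doubles the
# cost of every move, which is the liquidity effect described above.
print(cost_to_move(8_400, 1_600, 0.10))  # cost to push down to 10%
print(cost_to_move(8_400, 1_600, 0.50))  # cost to push up to 50%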

Martin Randall predicts YES

@evergreenemily Yes, I think that is another reason Isaac is wrong here.

Isaac King predicts NO

I think we must be talking past each other somehow.