In a year, will I think that risk of AI apocalypse is between 1 and 10%?
66 · closes Nov 15 · 52% chance

Currently I'm at 7% before 2100. That's the timeframe the 1-10% range here refers to.

Nov 15, 4:36pm: In a year or less, will I think that risk of AI apocalypse is between 1 and 10%? → In a year, will I think that risk of AI apocalypse is between 1 and 10%?


Related questions

Will a parrot explain the risk of AI before 2026?
Isaac
14% chance
Will a member of Congress be caught giving a speech generated by AI by Dec. 31?
Fox News Bot
20% chance
In 2028, will AI be at least as big a political issue as abortion?
Scott Alexander
36% chance
In 5 years, will we have overregulated or underregulated AI? (resolves to manifest poll)
Nathan Young
61% chance
Will AI be a major topic during the 2024 presidential debates in the United States?
Matthew Barnett
28% chance
Will >$100M be invested in dedicated AI Alignment organizations in the next year as more people become aware of the risk we are facing by letting AI capabilities run ahead of safety?
Bionic
81% chance
Will an AI system be known to have resisted shutdown before 2024?
Peter Wildeford
16% chance
Will anyone very famous claim to have made an important life decision because an AI suggested it by the end of 2023?
Isaac
19% chance
Will Tyler Cowen agree that an 'actual mathematical model' for AI X-Risk has been developed by October 15, 2023?
Joe Brenton
8% chance
Will it be public knowledge by EOY 2025 that a major AI lab believed it had created AGI internally before October 2023?
dmayhem93
25% chance
Will AI outcompete best humans in competitive programming before the end of 2023?
Will Science's Top Breakthrough of the Year in 2023 be AI-related?
dp
34% chance
Will AI pass the Longbets version of the Turing test by the end of 2029?
Daniel Reeves
50% chance
Will Gallup's poll on America's most important problems have at least 1% of respondents identify AI by the end of 2023?
Isaac
24% chance
Will there have been a noticeable sector-wide economic effect from a new AI technology by the end of 2023?
Nostradamnedus
15% chance
Will Biden sign an executive order primarily focused on AI in 2023?
S G
49% chance
🐕 Will A.I. Be Able to Make Significantly Better, "Common Sense Judgements About What Happens Next," by End of 2023?
Patrick Delaney
39% chance
Will I observe significant Negative Polarization around AI generated art in 2023?
Lars Doucet
34% chance
Will artificial superintelligence exist by 2030? [resolves N/A in 2027]
Will any AI cause an international incident before August 2024? (M1300 subsidy)
Pat Scott🩴
32% chance
Martin Randall

Checking: this resolves NO if, at close, you think it is 10.01% likely that AI kills all humans by 2100? So this is a double-sided market where I might bet NO either if I think you will decide it is much less likely or if I think you will decide it is more likely?

Patrick Delaney

Working from the Fermi Paradox perspective, nuclear apocalypse (little ol' Nukey-Nuke) gets time-bounded and rated in percentage points per century. For example, there's a 1% risk of Nukey-Nuke before 2100, perhaps another 1% risk between 2100 and 2200, etc. I'm not even going to attempt to define the Bayesian inputs and outputs of this system, because I really reject online Bayesian posturing; it's often a misapplication of probability used to sound smart and win an argument. So we're just going to hand-wave and build a stupid cumulative model: Nukey-Nuke by 2300 would be 1% + 1% + 1% = 3%.

Likewise, it's reasonable to believe that even if AGI is just outside the realm of fantasy, you can at least slide a century-long window forward to estimate when AGI destruction occurs; e.g. perhaps it's also 1% before 2100, or, I don't know, maybe it's more like 1% before the year 2300.

So it's reasonable to believe that there is a chance of complete human destruction by AGI within a certain timeframe; no knock on that at all. I'm not going to attempt to reason out what that percentage might or might not be between now and 2100. However, look at the pace of nuclear proliferation, how insane global conflict has gotten after the pandemic, the likelihood of additional pandemics leading to more global conflict, and the virtual impossibility of international collaboration to keep global warming below +3 deg C. Those are all hard, known things, whereas AGI is more speculative. So you could apply one weight, W%, to those known threats of doom and another weight, Q%, to the speculative threats of doom, and you get:

Nathan's "True" Total Probability of Doom by 2100 = (Nukey-Nuke%)*W + (AGI_doom%)*Q

So now go back and normalize that, and you might find that AGI_doom is significantly lower than what you thought. E.g. you could work backwards from how likely you think it is that humans just die out regardless of any cause, and then fill in the unknown variables. So if you think Nukey-Nuke is actually 10%, then with W = Q = 1 and your current 7% for AGI you're at a 17% chance of annihilation, which might seem too high, so you might adjust your Q.
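A minimal sketch of that weighting idea, assuming W = Q = 1, a made-up 10% figure for nuclear risk, and a made-up 12% "acceptable total" for the backwards calculation; none of these numbers are anyone's actual estimates:

```python
# Hand-wavy weighted doom model sketched in the comment above.
# Every number here is illustrative, not anyone's actual estimate.

nukey_nuke = 0.10   # assumed "known" risk of nuclear apocalypse by 2100
agi_doom = 0.07     # Nathan's stated 7% AI-apocalypse risk by 2100
W = 1.0             # weight on hard, known threats
Q = 1.0             # weight on speculative threats

total_doom = nukey_nuke * W + agi_doom * Q
print(f"Total probability of doom by 2100: {total_doom:.0%}")  # 17%

# Working backwards: pick a total you can live with (12% is made up here)
# and solve for the implied weight Q on the speculative AGI component.
target_total = 0.12
Q_implied = (target_total - nukey_nuke * W) / agi_doom
print(f"Implied weight Q: {Q_implied:.2f}")  # ~0.29
```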

Isaac predicts YES

For a rational agent, the expected value of their future probability should equal their current probability. No human is particularly close to rational from a Bayesian standpoint, but "price future expected changes into my current probability estimate" is not that complicated a concept, and it's something I expect you're already trying to do. Of course that doesn't mean we should expect you to stay within that range; maybe you're confident something will wildly swing your estimate much higher or much lower but you don't know the direction yet.

Realistically though, I think this market should be much higher than it is now.
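A toy check of that "expected future probability equals current probability" claim (conservation of expected evidence) under an idealized Bayesian update; the conditional probabilities below are arbitrary, picked only to make the identity concrete:

```python
# Conservation of expected evidence: for an ideal Bayesian, the expected
# value of tomorrow's probability equals today's probability.
# The numbers are arbitrary, chosen only to make the identity concrete.

p_h = 0.07             # current credence in the hypothesis
p_e_given_h = 0.6      # chance of observing evidence E if the hypothesis is true
p_e_given_not_h = 0.2  # chance of observing E if it is false

p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
p_h_given_e = p_e_given_h * p_h / p_e                  # posterior if E is observed
p_h_given_not_e = (1 - p_e_given_h) * p_h / (1 - p_e)  # posterior if E is not observed

expected_future = p_e * p_h_given_e + (1 - p_e) * p_h_given_not_e
print(f"{expected_future:.4f}")  # 0.0700, the same as the current probability
```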

Nathan Young predicts YES

@IsaacKing But you can convince me to change my mind.

Nathan Young predicts YES

@NathanpmYoung I sense you think I already hold all the information I might possess in the future, which suggests to me that you either think I'm a lot cleverer/more well-read than I am, or you fundamentally misunderstand the possibilities of this market. You can take a position and then try to change my mind.

Isaac predicts YES

@NathanpmYoung True. If anyone successfully convinces you out of that range, give me a chance to make a counterargument before you resolve the market. :)

Peter Wildeford

Over what time period?

Isaac

What's your current probability on it?

Nathan Young bought Ṁ15 of YES
Isaac

@NathanpmYoung If you're already in the noted range, then shouldn't this market resolve YES right now?

Preen

Why? I acknowledge your point above that the expected value of his future probability belief should equal his current probability belief, but that doesn't mean there's zero volatility, i.e. there's a >0 chance that his beliefs change, so I don't think it should settle to 1 immediately. (Although I also think the annual vol for this belief would probably be small enough that 0.7 seems low.)
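A rough sketch of that volatility point, modelling the belief as a one-step random walk in log-odds; the volatility figure is made up, not an estimate of how much Nathan's view actually moves:

```python
import math
import random

# Rough Monte Carlo illustration of the volatility point: even if the current
# credence (7%) sits inside the 1-10% band, random drift in beliefs over a year
# means it may not still be there at close. The volatility figure is made up.

def p_still_in_range(p0=0.07, vol=0.5, n=100_000, lo=0.01, hi=0.10):
    start = math.log(p0 / (1 - p0))  # current belief in log-odds
    hits = 0
    for _ in range(n):
        p1 = 1 / (1 + math.exp(-(start + random.gauss(0, vol))))
        if lo <= p1 <= hi:
            hits += 1
    return hits / n

print(p_still_in_range())  # well below 1.0; roughly 0.78 with these assumptions
```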

lu

Risk per year or risk of it happening at all?