In a year, will I think that risk of AI apocalypse is between 1 and 10%?
Resolved YES (Nov 29)

Currently I'm at 7% before 2100. That's within the 1-10% range here.

Nov 15, 4:36pm: In a year or less, will I think that risk of AI apocalypse is between 1 and 10%? → In a year, will I think that risk of AI apocalypse is between 1 and 10%?


🏅 Top traders

#  Name  Total profit
1  Ṁ1,455
2  Ṁ455
3  Ṁ210
4  Ṁ153
5  Ṁ138
predicted YES

Can this resolve?

predicted YES

@NathanpmYoung Time to resolve the market?

bought Ṁ500 of YES

Betting this up to reflect that, as of late August, his view had only changed by one percentage point, at the bottom end (2-10%). Further, as of early this month he said that he does not think pushing for further government regulation of AI is likely to be good on net.

Aug: https://twitter.com/NathanpmYoung/status/1695726726670643273#m

Oct: https://twitter.com/NathanpmYoung/status/1709130125948657783?s=20

Short of something like Gemini releasing, awing us all, and him updating quickly, I do not see this changing.

Checking: this resolves NO if, at close, you think it is 10.01% likely that AI kills all humans by 2100? So this is a double-sided market where I might bet NO if I think you will decide it is much less likely or much more likely?

Working from the Fermi Paradox perspective, nuclear apocalypse (little ol' Nukey-Nuke) gets time-bounded and rated in percentage points per century. For example, there's a 1% risk of Nukey-Nuke before 2100, perhaps a 1% risk between 2100 and 2200, and so on. I'm not even going to attempt to define the Bayesian inputs and outputs of this system, because I really reject online Bayesian posturing; it's often a misapplication of probability to sound smart and win an argument. So we're just going to hand-wave and build a stupid cumulative model: Nukey-Nuke by 2300 would be 1% + 1% + 1% = 3%.

Likewise, even if AGI sits just outside the realm of fantasy, it's reasonable to put a sliding century-wide window over the timeline and estimate when AGI destruction occurs, e.g. perhaps it's also 1% before 2100, or, I don't know... maybe it's more like 1% before the year 2300.

So it's reasonable to believe there is some chance of complete human destruction by AGI within a certain timeframe; no knock on that at all, and I'm not going to attempt to reason out what that percentage might be between now and 2100. However, look at the pace of nuclear proliferation, how insane global conflict has gotten after the pandemic, the likelihood of additional pandemics leading to more global conflict, and the virtual impossibility of international collaboration to keep global warming below +3 °C. These are all hard, known things, whereas AGI is more speculative. So you could apply one weight to those known threats of doom, call it W, and another to the speculative threats of doom, call it Q, and you get:

Nathan's "True" Total Probability of Doom by 2100 = (Nukey-Nuke%)*W + (AGI_doom%)*Q

So now go back and normalize that, and you might find that AGI_doom is significantly lower than you thought. E.g. you could work backwards from how likely you think it is that humans just die out from any cause at all, then fill in the unknown variables. If you think Nukey-Nuke is actually 10%, then with W = Q = 1 and Nathan's 7% for AGI you're at a 17% chance of annihilation, which might seem too high, so you might adjust your Q downward.
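Since this boils down to a couple of lines of arithmetic, here is a minimal sketch of that weighted model in Python. The helper functions and every number in them are illustrative assumptions for this comment's hand-wave, not figures Nathan has endorsed:

```python
# Minimal sketch of the weighted doom model above. W and Q are the weights on
# "known" vs. "speculative" threats; all numbers below are placeholders.

def total_doom(nuke_pct: float, agi_pct: float, w: float = 1.0, q: float = 1.0) -> float:
    """P(doom by 2100) = nuke_pct * W + agi_pct * Q (the additive hand-wave)."""
    return nuke_pct * w + agi_pct * q

def implied_q(total_pct: float, nuke_pct: float, agi_pct: float, w: float = 1.0) -> float:
    """Back out Q from a 'total doom budget' you find plausible."""
    return (total_pct - nuke_pct * w) / agi_pct

# Forward: 10% nuclear + 7% AGI with W = Q = 1 gives 17%.
print(round(total_doom(0.10, 0.07), 2))          # 0.17

# Backward: if 12% total doom by 2100 feels right, the implied weight on AGI is ~0.29.
print(round(implied_q(0.12, 0.10, 0.07), 2))     # 0.29
```

Running it forwards reproduces the 17% example; running it backwards shows how a smaller total doom budget forces Q well below 1.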

predicted YES

For rational agents, their future expected probability should be the same as their current probability. No human is particularly close to rational from a Bayesian standpoint, but "price future expected changes into my current probability estimate" is not that complicated a concept, and is something that I expect you're already trying to do. Of course that doesn't mean we should expect you to stay within that range; maybe you're confident something will wildly swing your estimate to be much higher or much lower but you don't know the direction yet.

Realistically though, I think this market should be much higher than it is now.

predicted YES

@IsaacKing But you can convince me to change my mind.

predicted YES

@NathanpmYoung I sense you think I hold access to all the information I might possess in the future, which suggests to me you either think I'm a lot cleverer/more well-read than I am or you fundamentally misunderstand the possibilities of this market. You can take a position and then try to change my mind.

predicted YES

@NathanpmYoung True. If anyone successfully convinces you out of that range, give me a chance to make a counterargument before you resolve the market. :)

Over what time period?

What's your current probability on it?

bought Ṁ15 of YES

@NathanpmYoung If you're already in the noted range, then shouldn't this market resolve YES right now?

Why? I acknowledge your point above that the expected value of his future probability belief should equal his current probability belief, but that doesn't mean there's zero volatility. I.e. there's a >0 chance that his belief moves out of the range, so I don't think it should settle to 1 immediately. (Although I also think the annual vol for this belief would probably be small enough that 0.7 seems low.)
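To put a number on that intuition, here is a rough Monte Carlo sketch. It models the belief as a random walk in log-odds space (only approximately the martingale-in-probability described above), and the annual volatility is an arbitrary assumption, not an estimate of how much Nathan actually updates:

```python
import math
import random

def stay_in_range(p0=0.07, lo=0.01, hi=0.10, annual_sigma=0.5, n_sims=100_000, seed=0):
    """Fraction of simulations in which a belief, modeled as a Gaussian step in
    log-odds space over one year, ends inside [lo, hi]. All parameters are made up."""
    rng = random.Random(seed)
    start_logit = math.log(p0 / (1 - p0))
    stays = 0
    for _ in range(n_sims):
        end_logit = start_logit + rng.gauss(0, annual_sigma)
        p1 = 1 / (1 + math.exp(-end_logit))
        if lo <= p1 <= hi:
            stays += 1
    return stays / n_sims

# Roughly 0.78 with these assumed parameters.
print(stay_in_range())
```

With that made-up volatility the belief ends the year inside 1-10% a bit under 80% of the time, which is the sense in which a 0.70 market price can look somewhat low without being obviously wrong.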

Risk per year or risk of it happening at all?
