What'll Be My P(Loss of Control before 2100) In Jan 2026?
Resolved Jan 2 as 96%

I'll resolve this question to my P(Loss of Control before 2100) at the start of 2026.

Reusable definitions:

"Loss of Control" captures all scenarios where the long-term future does not contain large amounts of value because of humanity.

"Before 2100" adds a time cutoff in case no singularity or extinction events happen until end of century.

Examples and counterexamples:

  1. Asteroid wipes out Earth = Loss of Control

  2. Aligned ASI for a transhumanist utopia = No Loss of Control

  3. We create unaligned ASI that kills us and makes the universe worthless by our values = Loss of Control

  4. Unaligned ASI kills us but similar-to-us aliens still do valuable-to-us things in their region of the universe = Loss of Control

  5. Unaligned ASI sells copies of our minds to charitable aliens who gift us nice things = Loss of Control

  6. Humanity makes it to 2100 without ASI regardless of what happens later = No Loss of Control

History:

On 2024-11-06 my aggregate guess, all things considered, was P(Loss of Control before 2100)=96%

Markets for other years:

2026: this market https://manifold.markets/Joern/whatll-be-my-ploss-of-control-befor

2027: https://manifold.markets/Joern/whatll-be-my-ploss-of-control-befor-ZPPPN68tZp

2028: https://manifold.markets/Joern/whatll-be-my-ploss-of-control-befor-QlsSCOsZLL

2029: https://manifold.markets/Joern/whatll-be-my-ploss-of-control-befor-SLgQN5RP9p

2030: https://manifold.markets/Joern/whatll-be-my-ploss-of-control-befor-8dpOh0pEhn

2035: https://manifold.markets/Joern/whatll-be-my-ploss-of-control-befor-LN82gdUEUs

2040: https://manifold.markets/Joern/whatll-be-my-ploss-of-control-befor-p0ZpyLdhpQ

2025: (Relevantly different criteria & question!) https://manifold.markets/Joern/whats-my-pdoom-in-2025


I've been too right in 2025 about AI :/

So my hope that there's some relevant error in my model of the future shrank a little bit.

All in all, I think 2025 was a year in which we saw that politicians don't pick up the ball; they don't get the problem. For example, MIRI's book had some impact, but not of the escalating kind I'd held some small but non-vanishing hope for.

I hadn't firmly expected to see consistent doubling times in the length of human tasks AI agents can handle (the famous METR plot), so I'm now more confident that even pre-RSI progress is fast. I can vaguely guess some region on the y-axis where RSI becomes doable, but the uncertainty is high, so I'm not sold on ultra-short timelines (<1 year), though I find them at least possible.
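For concreteness, here is a minimal back-of-the-envelope sketch of that kind of extrapolation. The current task horizon, doubling time, and RSI-relevant horizon below are illustrative assumptions, not METR's published figures or the author's numbers:

```python
# Back-of-the-envelope extrapolation of a METR-style task-horizon trend.
# All three inputs are illustrative assumptions, not measured values.
import math

current_horizon_hours = 1.0   # assumed: agents reliably handle ~1-hour human tasks today
doubling_time_months = 7.0    # assumed doubling time of the task horizon
rsi_horizon_hours = 160.0     # assumed horizon (~a work-month) at which RSI "becomes doable"

doublings_needed = math.log2(rsi_horizon_hours / current_horizon_hours)
months_needed = doublings_needed * doubling_time_months
print(f"{doublings_needed:.1f} doublings ~ {months_needed:.0f} months ~ {months_needed / 12:.1f} years")
```

Shifting any of these assumed inputs by a factor of a few moves the answer by years, which is why the uncertainty about where on the y-axis RSI becomes doable dominates the timeline question.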

There has been a moderately positive update on how much useful work we can get out of AI that isn't dangerously competent yet. Anthropic's emergent-misalignment work is one relevant source here; protein folding results are another. AI automation research and coding agents were neutral/irrelevant to this question.

I continue to see no strong indicators that politics is unable to realize that we're going to die soon unless things change in a narrow, specific way. It feels more like time is running out due to the absence of the rare positive events I'd hoped for. The AI regulation preemption laws didn't get passed, as far as I can tell, but I've also not been too worried about them: a party leader who worries that their legacy will be erased by ASI could undo such a law later.

MIRI's fundraiser felt a bit embarrassing; I think I overestimated the size and wealth of the community of people who understand AI x-risk well enough to know that MIRI is pursuing the best strategy there is.

I was overconfident in 2025 on the topic of animal suffering, but I spent something like 100x less effort on object-level arguments there than on AI x-risk. I don't think effort helps much against overconfidence (i.e. the sigmoid saturates early), so I've pushed unknown unknowns up more (i.e. I now assign a slightly higher probability that each gear in my model of AI x-risk could be an illusion, and that a very different mechanism connects my so-far-related observations).

So overall: nothing too surprising happened that fell outside my model; I mostly just filled in more details, and since there's less time left until the apocalypse, there's (not proportionally!) less chance for positive miracles. p(control) down by 0.2x. On the other hand, I'm more paranoid about overconfidence in a few spots. That effect is mitigated by the fact that in my model loss of control is almost overdetermined, since it is disjunctive rather than conjunctive. So maybe p(loss of control) down by 0.02. It's not really productive to think in deltas here, or even in math, and overall my gut feeling stayed at roughly 96%, with most of the remaining hope being that I'm just very wrong about everything, followed by a smaller hope that we can get a global ban on superintelligence before it's built.
