What is the fate of our universe-shard?
Closes 2100
  • Strong Utopia: 13%
  • Weak Utopia: 16%
  • Pretty Good: 13%
  • Conscious Meh: 4%
  • Unconscious Meh: 50%
  • Weak Dystopia: 3%
  • Strong Dystopia: 0.3%
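
As a quick sanity check (not part of the market itself), here is a minimal Python sketch that encodes the probabilities displayed above and confirms they sum to roughly 100%; the small shortfall appears to be rounding of the displayed percentages:

```python
# Displayed outcome probabilities (percent), copied from the market listing above.
outcomes = {
    "Strong Utopia": 13.0,
    "Weak Utopia": 16.0,
    "Pretty Good": 13.0,
    "Conscious Meh": 4.0,
    "Unconscious Meh": 50.0,
    "Weak Dystopia": 3.0,
    "Strong Dystopia": 0.3,
}

total = sum(outcomes.values())
print(f"Total probability: {total:.1f}%")  # 99.3% -- the gap to 100% is display rounding
```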

Resolves per my subjective judgment at the close time, according to the definitions given in “Superintelligent AI is necessary for an amazing future, but far from sufficient”:

  • Strong Utopia:  At least 95% of the future’s potential value is realized.

  • Weak Utopia:  We lose 5+% of the future’s value, but the outcome is still at least as good as “tiling our universe-shard with computronium that we use to run glorious merely-human civilizations, where people's lives have more guardrails and more satisfying narrative arcs that lead to them more fully becoming themselves and realizing their potential (in some way that isn't railroaded), and there's a far lower rate of bad things happening for no reason”.

  • Pretty Good:  The outcome is worse than Weak Utopia, but at least as good as “tiling our universe-shard with computronium that we use to run lives around as good and meaningful as a typical fairly-happy circa-2022 human”.

  • Conscious Meh:  The outcome is worse than the “Pretty Good” scenario, but isn’t worse than an empty universe-shard. Also, there’s a lot of conscious experience in the future.

  • Unconscious Meh:  Same as “Conscious Meh”, except there’s little or no conscious experience in our universe-shard’s future. E.g., our universe-shard is tiled with tiny molecular squiggles (a.k.a. “molecular paperclips”).

  • Weak Dystopia:  The outcome is worse than an empty universe-shard, but falls short of “Strong Dystopia”.

  • Strong Dystopia:  The outcome is about as bad as physically possible.

If the close time is reached and I think it's too soon to tell, I will either extend the close time (if more time seems likely to clear up the question) or resolve according to my probability distribution at the time.


This is more Unconscious Meh than I was expecting people to go for (though I don’t disagree on the object level); what’s the model? Unaligned AGI? Humans going extinct for unrelated reasons in the next million years, and there are no aliens or unconscious aliens (including unaligned alien AGI)?

@Tetraspace My assumptions: no intelligent life other than us; life and intelligence are both evolutionary bottlenecks. We will fail to create a true AGI. We won't have a self-sustaining human base other than Earth. Finally, some silly "disaster" happens and our Earth-bound civilization goes down the drain. Time is up anyway for all conscious Earth species in 150-1000 million years. Then come 22 billion uninteresting years.

@AndreiGavrea How could we fail to create true AGI? I could imagine a 10% chance that AGI kills everyone, but the other 90% looks pretty nice in my view.

Manifold in the wild: A Tweet by tetraspace 💎

Simultaneously my silliest and most serious market https://manifold.markets/Tetraspace/what-is-the-fate-of-our-universesha