
Resolves per my subjective judgment at the close time, according to the definitions given in "Superintelligent AI is necessary for an amazing future, but far from sufficient":
Strong Utopia: At least 95% of the future’s potential value is realized.
Weak Utopia: We lose 5% or more of the future's value, but the outcome is still at least as good as “tiling our universe-shard with computronium that we use to run glorious merely-human civilizations, where people's lives have more guardrails and more satisfying narrative arcs that lead to them more fully becoming themselves and realizing their potential (in some way that isn't railroaded), and there's a far lower rate of bad things happening for no reason”.
Pretty Good: The outcome is worse than Weak Utopia, but at least as good as “tiling our universe-shard with computronium that we use to run lives around as good and meaningful as a typical fairly-happy circa-2022 human”.
Conscious Meh: The outcome is worse than the “Pretty Good” scenario, but isn’t worse than an empty universe-shard. Also, there’s a lot of conscious experience in the future.
Unconscious Meh: Same as “Conscious Meh”, except there’s little or no conscious experience in our universe-shard’s future. E.g., our universe-shard is tiled with tiny molecular squiggles (a.k.a. “molecular paperclips”).
Weak Dystopia: The outcome is worse than an empty universe-shard, but not as bad as “Strong Dystopia”.
Strong Dystopia: The outcome is about as bad as physically possible.
If the close time arrives and I think it's still too soon to tell, I will either extend the close time (if I think more time will help clear up the question) or resolve according to my probability distribution at that time.