/EliezerYudkowsky/if-artificial-general-intelligence
(Note that this is about the multiple-choice market where Eliezer chose the answers, not the free response one where anyone can add their own.)
The base rate for complex long-term markets resolving N/A seems high. If this one resolves at all in the next twenty or so years, I would expect that resolution to be N/A overwhelmingly often, so this feels like an easy bet. "Yes" plausibly pays out way earlier, so even if you feel "No" is more likely, opportunity cost makes it comparatively less valuable.
@MartinRandall The description doesn't include any mention of an N/A resolution, and I think it implies that one of the given options will be the resolution.
@IsaacKing Huh. What would it resolve to if we don't have an okay outcome? The answer that "could have worked" in the creator's judgement? Generally I assume that "if X" markets resolve N/A if not X.
@MartinRandall Oh, that's true. I suppose there are some outcomes that are not okay yet still result in the market resolving. But I ascribe those extremely low probability.
@MartinRandall To my understanding, Eliezer's definition of "okay" includes anything better than everybody being dead or tortured for eternity. So for that market's condition to not be met requires some contrived scenario wherein human quality of life is much worse than it is now, yet Manifold Markets still exists and people care about earning mana.
@IsaacKing For a non-N/A resolution we have to get >20% of the maximum attainable value. So if we all decide that AI is too dangerous and don't convert the lightcone to hedonium, it resolves N/A.
An outcome is "okay" if it gets at least 20% of the maximum attainable cosmopolitan value that could've been attained by a positive Singularity