An outcome is "okay" if it gets at least 20% of the maximum attainable cosmopolitan value that could've been attained by a positive Singularity (a la full Coherent Extrapolated Volition done correctly), and existing humans don't suffer death or any other awful fates.
This market is a duplicate of https://manifold.markets/IsaacKing/if-we-survive-general-artificial-in with different options. https://manifold.markets/EliezerYudkowsky/if-artificial-general-intelligence-539844cd3ba1?r=RWxpZXplcll1ZGtvd3NreQ is this same question but with user-submitted answers.
(Please note: It's a known cognitive bias that you can make people assign more probability to one bucket over another, by unpacking one bucket into lots of subcategories, but not the other bucket, and asking people to assign probabilities to everything listed. This is the disjunctive dual of the Multiple Stage Fallacy, whereby you can unpack any outcome into a big list of supposedly necessary conjuncts that you ask people to assign probabilities to, and make the final outcome seem very improbable.
So: That famed fiction writer Eliezer Yudkowsky can rationalize at least 15 different stories (options 'A' through 'O') about how things could maybe possibly turn out okay; and that the option texts don't have enough room to list out all the reasons each story is unlikely; and that you get 15 different chances to be mistaken about how plausible each story sounds; does not mean that Reality will be terribly impressed with how disjunctive the okay outcome bucket has been made to sound. Reality need not actually allocate more total probability into all the okayness disjuncts listed, from out of all the disjunctive bad ends and intervening difficulties not detailed here.)
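A toy calculation makes the framing effect concrete. The numbers below are placeholders chosen purely for illustration, not probabilities taken from this market or from anyone's actual estimates:

```python
# Illustrative arithmetic only; every probability here is a made-up placeholder.

# Disjunctive unpacking: elicit 15 separately-judged "okay outcome" stories and sum them.
# Each story sounds maybe 2% plausible on its own...
unpacked_disjuncts = [0.02] * 15
print(sum(unpacked_disjuncts))   # ~0.30, which can feel much larger than a single
                                 # holistic judgment of, say, 0.05 for the whole bucket.

# Multiple Stage Fallacy (the conjunctive dual): unpack one outcome into 10 "necessary"
# stages, assign each a modest 0.7, and the product makes the outcome seem near-impossible.
product = 1.0
for p in [0.7] * 10:
    product *= p
print(product)                   # ~0.028, driven by the framing rather than by evidence.
```

Reality is not obligated to respect either framing; the point is only that the elicitation format can move the answer.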

Nobody here knows enough about neurofeedback to consider it at all, which is the simplest explanation for why it ended up at the literal bottom of the list when in reality it is in the top 5.

@ooe133 Oops, this statement is worded in a non-Bayesian manner. Correction: not enough people here know enough about...
Is "AGI gains control of the future but is uninterested in killing existing humans or destroying human civilization" option C, none of the above, or a "not okay" outcome?

@MaxMorehead it's pretty close to E, "Whatever strange motivations end up inside an unalignable AGI, or the internal slice through that AGI which codes its successor, they max out at a universe full of cheerful qualia-bearing life and an okay outcome for existing humans."
It's baffling to me that people believe A is plausible in any reasonable timespan without AGI; brain-computer interfaces and emulation are so far off. I believe it's possible we'll see greater coordination as described, but I'm gonna go with M. Perhaps it's just not such a clean catch-22 that you already need aligned AGI to align AGI? If that turns out to be the case, then I'll go with B.


Unironically betting on F at 1% because of https://slatestarcodex.com/2018/04/01/the-hour-i-first-believed/

I made a duplicate where you are allowed to short sell options you think are unlikely: https://manifold.markets/ChristopherKing/if-artificial-general-intelligence-669e44ca740e

Why would you post this as an image? You made me scroll through Yudkowsky’s anxiety-inducing Twitter timeline to find the source and the context of what he’s talking about.
https://twitter.com/ESYudkowsky/status/1656150555839062017
Spoiler: he’s talking about OpenAI’s attempts to use GPT-4 to interpret and label the neurons in GPT-2.

@AndrewG I like this as a social media post, but as a prediction market I'm frustrated by its high chance of resolving N/A (20% is a lot) and by Manifold's DPM mechanism.
A seems so unlikely... augmenting biological brains with their arbitrary architecture that evolved over millions of years adds so many complexities compared to just sticking with silicon.

@ElliotDavies Don't know if this counts as a steelman, but I might say that augmenting human brains is more difficult and slower, and offers fewer opportunities for scalability and market capture, than making a model that does most of what you wanted the human to do.
Of course, making such a model is fraught with security problems, but cybersecurity has been down the drain for the last ten years anyway (real-time software updates, anyone?)

Anyone who thinks that AGI is definitely possible should have no problem answering this simple question:

@PatrickDelaney but you could just run it arbitrarily slowly, so there's no lower bound. Also wouldn't you expect power requirements to change as the technology is further developed?
@PatrickDelaney even if better bounds were put on the question by specifying that the AI has to be able to compete with humans on certain timed tasks, the answer most certainly can't be higher than 20 watts, as that's about how much power a human brain consumes.

@KabirKumar one's level of confidence should be supported by a highly sophisticated answer rather than by browbeating others because they are afraid of looking stupid. It's a super simple question... how much power does your computer use?
@PatrickDelaney Except that's a completely different question from whether AGI is possible.
AGI is definitely possible because we are here.
The power consumption of the first implementation will depend on what that implementation is (it could range from somewhat optimized, with little more than the necessary parts for an AGI, to millions of times more than the necessary parts).
We also don't know exactly what is needed. Maybe it requires the same level of information processing as a human brain, or maybe there are algorithms that are very hard to think of, requiring breakthroughs in our understanding of what intelligence is, that would nonetheless give you an AGI able to run on a phone.
It will also depend on the hardware available at the time.
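For a rough sense of scale, here is a back-of-the-envelope comparison. The ~20 W brain figure is the estimate mentioned above; the accelerator wattage is an approximate board power for a current datacenter GPU, and the GPU count for a hypothetical first AGI deployment is a pure assumption:

```python
# Order-of-magnitude sketch; all figures are approximate or openly assumed.
brain_watts = 20            # common estimate of human brain power draw
gpu_watts = 700             # approximate board power of one current datacenter GPU
assumed_gpu_count = 1_000   # pure assumption for a hypothetical first AGI deployment

first_agi_watts = gpu_watts * assumed_gpu_count
print(first_agi_watts)                # 700,000 W under these assumptions
print(first_agi_watts / brain_watts)  # 35,000x the brain: the first implementation
                                      # need not be anywhere near 20 W, even if that
                                      # efficiency is achievable in principle.
```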