If Artificial General Intelligence has an okay outcome, which of these tags will make up the reason?
Brakes not hit: 28%
Brakes hit: 56%
Major capability limit in AI: 54%
Capability limit in AI: 49%
Slow and gradual capability gains in AI: 62%
Non-AI tech: 34%
Enhancing human minds and/or society: 36%
Huge alignment effort: 55%
New AI paradigm: 70%
Alignment unnecessary: 7%
Alignment relatively easy: 41%
Alignment extra hard: 1.3%
Well-behaved AI with bad ends: 19%

An okay outcome is defined in Eliezer Yudkowsky's market as:

An outcome is "okay" if it gets at least 20% of the maximum attainable cosmopolitan value that could've been attained by a positive Singularity (a la full Coherent Extrapolated Volition done correctly), and existing humans don't suffer death or any other awful fates.

In response to Eliezer Yudkowsky's market, Rob Bensinger organized the scenarios into a rough ontology using a set of tags. This market resolves YES for the tags that are true of the world when AI has an okay outcome.

This is great, but the options are long, dense, and partly overlapping, which made it trickier for me to assign relative probabilities. To help with that, I made this graphic putting the options in order and assigning tags to clusters of related scenarios.
