If Artificial General Intelligence has an okay outcome, which of these tags will make up the reason?
Slow and gradual capability gains in AI: 63%
Brakes hit: 62%
Huge alignment effort: 55%
New AI paradigm: 51%
Capability limit in AI: 49%
Major capability limit in AI: 42%
Enhancing human minds and/or society: 36%
Alignment relatively easy: 36%
Non-AI tech: 34%
Brakes not hit: 28%
Well-behaved AI with bad ends: 19%
Alignment unnecessary: 16%
Alignment extra hard: 1%

An okay outcome is defined in Eliezer Yudkowsky's market as:

An outcome is "okay" if it gets at least 20% of the maximum attainable cosmopolitan value that could've been attained by a positive Singularity (a la full Coherent Extrapolated Volition done correctly), and existing humans don't suffer death or any other awful fates.

In response to Eliezer Yudkowsky's market, Rob Bensinger organized the scenarios into a set of tags, giving them a cleaner ontology. This market resolves YES for each tag that is true of the world in which AI has an okay outcome.

This is great, but the options are long, dense, and partly overlapping, which made it trickier for me to assign relative probabilities. To help with that, I made this graphic, which puts the options in order and assigns tags to clusters of related scenarios.
