If AI has an okay outcome and there was no special effort, was it because alignment was easy?
75% chance
Resolves N/A if AI does not have an okay outcome, or if AI has an okay outcome because of a pause, human enhancement, a non-routine effort that prioritizes alignment as a blocker on capabilities (rather than pursuing it alongside capabilities as a second priority), or some non-AI route to a transhumanist future.
For example, if OpenAI's or Anthropic's strategy worked, this market would not resolve N/A: both have a "safety team" inside a capabilities org, rather than a capabilities team inside an alignment org, and neither is pursuing an alternate path to a transhumanist future.
Otherwise, resolves YES if alignment is easy, and NO otherwise. "Alignment is easy" means level 6 or easier on the ten levels of alignment difficulty.
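Read as a decision procedure, the criteria above reduce to a short function. Here is a minimal sketch in Python; the parameter names are illustrative, not part of the market's wording:

```python
from enum import Enum

class Resolution(Enum):
    YES = "YES"
    NO = "NO"
    NA = "N/A"

# Hypothetical encoding of the resolution criteria above.
# "via_special_route" covers a pause, human enhancement, a non-routine
# alignment-first effort, or a non-AI route to a transhumanist future.
# "alignment_difficulty_level" is the ten-level scale referenced above;
# "alignment is easy" means level 6 or easier.
def resolve(okay_outcome: bool,
            via_special_route: bool,
            alignment_difficulty_level: int) -> Resolution:
    if not okay_outcome or via_special_route:
        return Resolution.NA
    if alignment_difficulty_level <= 6:
        return Resolution.YES
    return Resolution.NO
```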
This question is managed and resolved by Manifold.
Related questions
If Artificial General Intelligence has an okay outcome, what will be the reason?
If AI has an okay outcome because of a huge alignment effort, where did AI progress stall out?
If AI has an okay outcome, was it because of humanity doing something beyond business-as-usual? (65% chance)
Is AI alignment computable? (50% chance)
If a huge alignment effort is part of the reason for AI having an okay outcome, will it involve a new AI paradigm? (60% chance)
How difficult will Anthropic say the AI alignment problem is?
Will Inner or Outer AI alignment be considered "mostly solved" first?
Conditional on AI alignment being solved, will governments or other entities be capable of enforcing use of aligned AIs? (37% chance)