If AI has an okay outcome and there was no special effort, was it because alignment was easy?
90% chance
Resolves N/A if AI does not have an okay outcome, or if AI has an okay outcome because of a pause, human enhancement, a non-routine effort that treats alignment as a blocker on capabilities (rather than pursuing it alongside capabilities as a second priority), or some non-AI route to a transhumanist future.
For example, if OpenAI's or Anthropic's strategy worked, this market would not resolve N/A: both have a "safety team" inside a capabilities org, rather than a capabilities team inside an alignment org, and neither is pursuing alternate paths to a transhumanist future.
Otherwise, resolves YES if alignment turns out to be easy, and NO if not. "Alignment is easy" means level 6 or easier on the ten levels of alignment difficulty.
This question is managed and resolved by Manifold.