If AI has an okay outcome and there was no special effort, was it because alignment was easy?
Closes 2031 · 90% chance

Resolves N/A if AI does not have an okay outcome, or if AI has an okay outcome because of a pause, human enhancement, a non-routine effort that treats alignment as a blocker on capabilities rather than as a second priority pursued alongside capabilities, or some non-AI route to a transhumanist future.

For example, if OpenAI's or Anthropic's strategy worked, this market would not resolve N/A: both have a "safety team" inside a capabilities org, rather than a capabilities team inside an alignment org, and neither is pursuing alternate paths to a transhumanist future.

Otherwise, resolves YES if alignment turns out to be easy, and NO if not. "Alignment is easy" means level 6 or easier on the ten levels of alignment difficulty.
