If AI has an okay outcome, was it because of humanity doing something beyond business-as-usual?
75% chance
Resolves N/A if AI does not have an okay outcome.
Otherwise, resolves YES if
this okay outcome is because of an AI pause,
OR the transhumanist future is achieved through non-AI technology,
OR humans are enhanced as part of the process by which alignment is solved,
OR there is a nonroutine effort for alignment, with AI being made by an organisation that makes alignment a top priority, treating it as the primary function of the organisation and as a blocker on capabilities rather than something that can be done alongside capabilities.
This question is managed and resolved by Manifold.
Related questions
If Artificial General Intelligence has an okay outcome, what will be the reason?
If AI has an okay outcome and there was no special effort, was it because alignment was easy? (90% chance)
If Artificial General Intelligence (AGI) has an okay outcome, which of these tags will make up the reason?
If a huge alignment effort is part of the reason for AI having an okay outcome, will it involve a new AI paradigm? (60% chance)
If AI wipes out humanity, will it resolve applicable markets correctly? (40% chance)
If AI has an okay outcome because of a huge alignment effort, where did AI progress stall out?
If AI has an okay outcome because of a new paradigm, where did AI progress stall out?
Will humanity wipe out AI? (10% chance)