Will prioritizing corrigible AI produce safe results?
45% chance
This market is conditional on the market "Will the company that produces the first AGI have prioritized Corrigibility?" (https://manifold.markets/PeterMcCluskey/will-the-company-that-produces-the). This market will resolve as N/A if that market resolves as NO or N/A.
If that market resolves as YES, this market will resolve one year later, to the same result as the market "Will AGI create a consensus among experts on how to safely increase AI capabilities?" (https://manifold.markets/PeterMcCluskey/will-agi-create-a-consensus-among-e).
I will not trade in this market.
This question is managed and resolved by Manifold.
Related questions
Will AGI create a consensus among experts on how to safely increase AI capabilities?
35% chance
By 2027 will there be a well-accepted training procedure(s) for making AI honest?
15% chance
Will AI be considered safe in 2030? (resolves to poll)
72% chance
AI honesty #2: by 2027 will we have a reasonable outer alignment procedure for training honest AI?
25% chance
Conditional on AI alignment being solved, will governments or other entities be capable of enforcing use of aligned AIs?
37% chance
Will AI be capable of fast self improvement before physical AI robots are massively used for improving AI capabilities?
87% chance
Before 2028, will there be a major self-improving AI policy*?
74% chance
Will AGI be a problem before non-G AI?
20% chance
[Carlini questions] Most improvements in best AI systems direct result of the prior generation of AI systems
[Carlini questions] Will people regularly trust AI systems to "know best"