
If Artificial General Intelligence has a poor outcome, what will be the reason?
79%: Someone finds a solution to alignment, but fails to communicate it before dangerous AI gains control.
78%: Something from Eliezer's list of lethalities occurs.
62%: Someone successfully aligns AI to cause a poor outcome.
47%: Alignment is impossible.
Inverse of https://manifold.markets/EliezerYudkowsky/if-artificial-general-intelligence-539844cd3ba1?r=S3JhbnR6.
This market will not resolve; it exists primarily for users to explore particular lethalities. Please add responses.
"Poor" means human extinction or mass human suffering.
Related questions
If Artificial General Intelligence has an okay outcome, what will be the reason?
If AI causes human extinction before 2100, how will it happen?
Why will "If Artificial General Intelligence has an okay outcome, what will be the reason?" resolve N/A?
If Artificial General Intelligence (AGI) has an okay outcome, which of these tags will make up the reason?
Who first builds an Artificial General Intelligence?
Will Eliezer's "If Artificial General Intelligence has an okay outcome, what will be the reason?" market resolve N/A? (29% chance)
If we survive general artificial intelligence, what will be the reason?
If we survive general artificial intelligence before 2100, what will be the reason?