
If Artificial General Intelligence has an okay outcome, what will be the reason?
13%: Yudkowsky is trying to solve the wrong problem using the wrong methods based on a wrong model of the world derived from poor thinking, and fortunately all of his mistakes have failed to cancel out.
11%: AIs will not have utility functions (in the same sense that humans do not); their goals, such as they are, will be relatively humanlike, and they will be "computerish" and generally weakly motivated compared to humans.
9%: We create a truth economy. https://manifold.markets/Krantz/is-establishing-a-truth-economy-tha?r=S3JhbnR6
8%: Alignment is not properly solved, but core human values are simple enough that partial alignment techniques can impart them robustly. Despite caring about other things, it is relatively cheap for AGI to satisfy human values.
7%
7%: Eliezer finally listens to Krantz.
6%: Humans become transhuman through other means before AGI happens.
4%: Humanity coordinates to prevent the creation of potentially-unsafe AIs.
4%: Power dynamics stay multi-polar, partly due to easy copying of SotA performance, the high coordination costs of bigger projects, and moderate takeoff speed. A "military strike on all society" remains an abysmal strategy for practically all entities.
3%: ASI needs your information, not your atoms. Humans will live very interesting lives.
3%: Some form of objective morality is true, and any sufficiently intelligent agent automatically becomes benevolent.
3%: AIs never develop coherent goals.
2%: Almost all human values are ex post facto rationalizations, and enough humans survive to do what they always do.
1.8%: Someone creates AGI(s) in a box and offers to split the universe. They somehow find a way to arrange this so that the AGI(s) cannot manipulate them or pull any tricks, and the AGI(s) give them instructions for safe pivotal acts.
1.7%: Nick Bostrom's "Hail Mary" idea, that AI will preserve humans in order to trade with possible aliens, works.
1.3%: "Corrigibility" is a bit more mathematically straightforward than initially presumed, in the sense that we can expect it to occur and can predict it relatively easily, even under less-than-ideal conditions.
1.2%: AGI is never built (indefinite global moratorium).
1.2%: An AI that is not fully superior to humans launches a failed takeover, and the resulting panic convinces the people of the world to unite to stop any future AI development.
1.2%: A lot of humans participate in a slow, scalable oversight-style system, which is used for a pivotal act or solves alignment well enough.
1.2%: Alignment is unsolvable. An AI that cares enough about its goal to destroy humanity is also forced to take things slowly in trying to align its future self, preventing runaway.
Duplicate of https://manifold.markets/EliezerYudkowsky/if-artificial-general-intelligence with user-submitted answers. An outcome is "okay" if it gets at least 20% of the maximum attainable cosmopolitan value that could've been attained by a positive Singularity (a la full Coherent Extrapolated Volition done correctly), and existing humans don't suffer death or any other awful fates.
This question is managed and resolved by Manifold.
Related questions
If Artificial General Intelligence (AGI) has an okay outcome, which of these tags will make up the reason?
Will we have an AGI as smart as a "generally educated human" by the end of 2025? (23% chance)
Why will "If Artificial General Intelligence has an okay outcome, what will be the reason?" resolve N/A?
If Artificial General Intelligence has a poor outcome, what will be the reason?
Who first builds an Artificial General Intelligence?
Will Artificial General Intelligence (AGI) lead directly to the development of Artificial Superintelligence (ASI)? (76% chance)
When artificial general intelligence (AGI) exists, what will be true?
Will artificial general intelligence be achieved by the end of 2025? (19% chance)
In what year will an AI lab in China build Artificial General Intelligence? (2031)
The probability of extremely good AGI outcomes (e.g., rapid human flourishing) will be >24% in the next AI experts survey (59% chance)