Should we develop irrational AGI?
Resolved NO on Jan 1
Assuming that not developing AGI is not an option, and considering that most catastrophic (world-ending) scenarios involving potential AGIs seem to stem from AI systems being rational agents, should we instead try to develop an irrational superintelligence?
This question is managed and resolved by Manifold.
🏅 Top traders

| # | Name | Total profit |
|---|------|--------------|
| 1 |  | Ṁ16 |
| 2 |  | Ṁ15 |
| 3 |  | Ṁ5 |
| 4 |  | Ṁ4 |
| 5 |  | Ṁ2 |
Related questions

- Will we get AGI before 2027? (12% chance)
- Will humans create AGI, either directly or indirectly, within the next 24 months? (16% chance)
- Will we get AGI before 2048? (87% chance)
- Will we reach "weak AGI" by the end of 2025? (24% chance)
- Will AI create the first AGI? (41% chance)
- By when will we have AGI?
- Will we have an AGI as smart as a "generally educated human" by the end of 2025? (24% chance)
- Will we get AGI before 1M humanoid robots are manufactured? (66% chance)
- Will we get AGI before 2030? (55% chance)
- Will we get AGI before 2047? (86% chance)