Assuming that not developing AGI is not an option, and considering that most problems (world-ending scenarios) involving potential AGIs seem to stem from AI systems being rational agents, should we instead try to develop an irrational superintelligence?
🏅 Top traders
| # | Name | Total profit |
|---|------|--------------|
| 1 | | Ṁ16 |
| 2 | | Ṁ15 |
| 3 | | Ṁ5 |
| 4 | | Ṁ4 |
| 5 | | Ṁ2 |
@EvanDaniel looks self-resolving to me. tbf i fucked up by not making it a poll. i might just refund the money.
I'll make an argument for NO.
The danger of a rational agent AGI is that if you specify the wrong goal, then that goal might be maximized by a plan that kills everybody.
What if the AGI picks a goal at random, and maximizes that? Probably a lot of goals involve killing everybody. The goal of "collect as many stamps as possible" causes it to kill everybody, despite killing not being mentioned in the goal: people use resources that could otherwise become stamps, and people might try to switch the stamp collector off.
What if the AGI acts randomly? You ask it a question, like "how do I cure cancer?" and it responds with "takod k zh iztsn tew." This is safe - but in what sense is it an AGI? There is an inherent tension between making an AI which is good at useful tasks and making an AI which is irrational.
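To make that tension concrete, here is a minimal toy sketch (all names and the two-action "world" are hypothetical, purely for illustration): an agent that maximizes whatever objective it is handed pursues that objective single-mindedly, while an agent that acts at random is harmless in this toy setting but also useless for any task.

```python
import random

# Toy world: the agent splits its effort between two actions; the stated
# goal only rewards stamp production, not anything else we care about.
ACTIONS = ["make_stamps", "preserve_everything_else"]

def stamp_objective(action: str) -> float:
    """The specified goal: reward only stamp production."""
    return 1.0 if action == "make_stamps" else 0.0

def maximizer(objective) -> str:
    """A 'rational' agent: always picks the action scoring highest on the goal."""
    return max(ACTIONS, key=objective)

def random_agent() -> str:
    """An 'irrational' agent: picks an action uniformly at random."""
    return random.choice(ACTIONS)

# The maximizer pours every step into stamps, because side effects are not
# part of the objective it was given.
print([maximizer(stamp_objective) for _ in range(5)])
# -> ['make_stamps', 'make_stamps', 'make_stamps', 'make_stamps', 'make_stamps']

# The random agent never optimizes the bad goal, but nothing ties its
# behavior to any task you actually wanted done either.
print([random_agent() for _ in range(5)])
```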
@Nick332 There is no tension, since no existing thing, including humans, is a "rational agent" in the technical sense, and that does not prevent humans from doing useful things.