Should we develop irrational AGI?
Resolved NO (Jan 1)

Assuming that not developing AGI is not an option, and considering that most catastrophic (world-ending) scenarios involving potential AGIs seem to stem from AI systems being rational agents, should we instead try to develop an irrational superintelligence?



It seems that we should, in fact, not develop irrational AGI.

How will this market be resolved?

@EvanDaniel Looks self-resolving to me. Tbf, I fucked up by not making it a poll. I might just refund the money.

@Symmetry just resolve it to a poll result

@nikki I am in favor of finding ways to make markets not be self-resolving :)

I'll make an argument for NO.

The danger of a rational-agent AGI is that if you specify the wrong goal, that goal might be maximized by a plan that kills everybody.

What if the AGI picks a goal at random and maximizes that? Most goals, pursued without limit, still involve killing everybody: a goal like "collect as many stamps as possible" leads the AGI to convert every available resource, people included, into stamps, even though killing is never mentioned in the goal.

What if the AGI acts randomly? You ask it a question, like "How do I cure cancer?" and it responds with "takod k zh iztsn tew." This is safe, but in what sense is it an AGI? There is an inherent tension between making an AI that is good at useful tasks and making an AI that is irrational.
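To make that tension concrete, here's a minimal toy sketch (my own illustration, not anything proposed in this thread; the actions, the "stamps" objective, and the harm numbers are all made up). The maximizer reliably racks up stamps but is indifferent to a side effect its objective never mentions, while the random agent causes little targeted harm and also accomplishes nothing:

```python
import random

ACTIONS = ["collect_stamps", "seize_resources", "do_nothing"]

# Hypothetical consequences of each action: (stamps gained, harm caused).
EFFECTS = {
    "collect_stamps": (1, 0),
    "seize_resources": (10, 1),  # best for the goal, harmful as a side effect
    "do_nothing": (0, 0),
}

def rational_agent(state):
    # Maximizes stamps only; harm never appears in its objective, so it is ignored.
    return max(ACTIONS, key=lambda a: EFFECTS[a][0])

def irrational_agent(state):
    # Acts at random: no targeted harm, but also useless for any task.
    return random.choice(ACTIONS)

def run(agent, steps=100):
    stamps = harm = 0
    for _ in range(steps):
        s, h = EFFECTS[agent(None)]
        stamps += s
        harm += h
    return stamps, harm

print("rational:   stamps=%d harm=%d" % run(rational_agent))
print("irrational: stamps=%d harm=%d" % run(irrational_agent))
```

The random agent's "safety" here is just uselessness, which is the point of the argument above.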

@Nick332 There is no tension, since no existing thing, including humans, is a "rational agent" in the technical sense, and this does not prevent them from doing useful things.
