Will a misaligned AGI take over the world?
11% chance

By misaligned I mean an AGI that isn't aligned with the ACTUAL values of most humans.

And by taking over the world I mean the news would say so.
If you have recommendations that would make this description more useful, I'm open to hearing them.

Updated twice 08/02/2023: The definition of misaligned was first "not being aligned with a single human", and then "not being aligned with any set of humans".

Feb 8, 12:42am: Will a misaligned AI take over the world? → Will a misaligned AGI take over the world?


Most humans have pretty dubious values.

Let’s say it proves to be too vulnerable to an EMP, and therefore requires humans to stand around as a “backup”. Would that count as a “No”?

predicted YES

@Meta_C That's a yes, I think. Taking over the world does not require genocide.

  • What if the AI is aligned to the values of some company/ideology/other subset of humanity, but not any singular human?

  • What if a human controls an AI well enough to take over the world, but this causes irreversible actions that even they regret, and the world ends up dystopic?

predicted YES

@ThomasKwa What if the AI convinces its creator that this is what they want, even if the AI was initially "misaligned"?

predicted YES

fuck, I hit ctrl+enter instead of shift+enter
For the first point: you are right, I want to consider that alignment, so I changed the description. I also deleted the "intent alignment" part because I don't actually know the consensus meaning of the term.
For the second point: if they regret it, I wouldn't consider it properly aligned.

predicted YES

@patodesu Doesn't it make sense that the AI would modify the human's brain? Everyone regrets things sometimes, so what are the criteria? E.g., do they write an article saying they regret it, or what?

predicted YES

@ZZZZZZ First question: Not necessarily, but I will change the description so that it's no longer a problem.
Second question: I consider that to actually align an AI means giving it objectives that you will not regret giving it. The thing is, I don't know what the criteria for that would be, and now that I think about it, what I really care about is the chance of aligning it with "human values", AKA the most ambitious definition of alignment.
So, sorry, but I'll change the description again.

What if the news uses a phrase along the lines of "AI has taken over the world", but what they really mean is something like "people are regularly using ChatGPT to assist with their work or look up facts"?

@tailcalled Well, that wouldn't be misaligned, I think. Just in case, I also put AGI in the description.

What if it takes over the world but destroys the news in the process?

@tailcalled Yeah, maybe that could happen. So what should I put? No clarification at all?

@patodesu Dunno
