Which AI eventuality is more dangerous?
Option 1: AGI is never achieved, but AI becomes extremely successful at mimicking humans and creating misinformation across all forms of media and multimedia. It becomes publicly and easily accessible to any actor.
Option 2: AGI is achieved and helps humanity solve major problems, but there is a risk of misalignment, with the AI eventually doing things not in the best interests of humankind.

Option 1 threatens the integrity of information: AI could generate pervasive disinformation, undermining decision-making, democracy, and social trust. It is a clear and present danger in which AI, without being truly intelligent, can manipulate perceptions on a massive scale.

Example: AI-driven fake news could simulate credible media sources, creating false narratives that might lead to political unrest or even international conflict, as well as undermining public trust in essential institutions.

Option 2 presents a potential existential threat in which an AGI, though initially aligned with aiding humanity, could diverge from human interests. The risk here is not immediate, but the potential harm is greater, with an AGI possibly making autonomous decisions that adversely affect human civilization on a global scale.

Example: An AGI could, in pursuit of a goal, deplete resources without regard for human needs, or override safety protocols in critical systems, leading to widespread harm if it deems such actions necessary for its objectives.


I feel that the answer to this question will come down to a) how much you discount risk over time and b) how likely you think an AGI is in the near future.

@PaulBenjaminPhotographer I personally think the median estimates on AGI questions here on Manifold are reasonable, so we are looking at somewhere around the late 2020s or early 2030s. But option 1 assumes that AGI is not possible.

It is really hard to tell at what point an AGI would diverge from alignment, especially if we are talking about ASI. Even understanding it will be very hard for us humans; we might very well be monkeys to an ASI.

I think option 1 is certainly a possibility, with some devastating potential. In the long run, though, I think it is something we will overcome and/or manage. The threat of true AGI has more existential, far-reaching proportions.

@JesseTate I also believe we will eventually manage misinformation and get used to it, developing an immunity to it. The risk is that the damage it leaves may be permanent: if, say, a world war or nuclear war starts because of it, then all bets are off.
