
Option 1 threatens the integrity of information: AI could generate pervasive disinformation, undermining decision-making, democracy, and social trust. It's a clear and present danger in which AI, without being truly intelligent, can manipulate perceptions at massive scale.
Example: AI-driven fake news could impersonate credible media sources, creating false narratives that might lead to political unrest or even international conflict, while eroding public trust in essential institutions.
Option 2 presents a potential existential threat: an AGI, though initially aligned with aiding humanity, could diverge from human interests. The risk is not immediate, but the potential harm is greater, with an AGI possibly making autonomous decisions that adversely affect human civilization on a global scale.
Example: In pursuit of a goal, an AGI could deplete resources without regard for human needs, or override safety protocols in critical systems, causing widespread harm if it deems such actions necessary for its objectives.
@PaulBenjaminPhotographer I personally think the median estimates on the AGI questions here on Manifold are reasonable, so we are looking at somewhere around the late 2020s or early 2030s. But option 1 assumes that AGI is not possible.
As for when an AGI would diverge from alignment, it's really hard to tell, especially if we are talking about ASI. Even understanding it will be very hard for us humans; we might very well be monkeys to an ASI.
@JesseTate I also believe we will eventually manage misinformation and get used to it, develop an immunity to it. The risk is that the damage it leaves could be permanent. If, say, a world war or nuclear war starts because of it, then all bets are off.