I'm looking for a very short + accessible written explainer on why AI x-risk is plausible. The target audience is my smart, but non-technical coworkers.
Things I didn't like about the explainers I've found:
- https://www.safe.ai/ai-risk: Jumps too deep - more of an overview rather than a pitch/explainer.
- https://medium.com/@NotesOnAIAlignment/a-gentle-introduction-to-why-ai-might-end-the-human-race-4670f4b5cdec: Very good level of accessibility, but way too long.
- https://muddyclothes.substack.com/p/a-simple-explanation-of-why-advanced: Pretty good, actually, but I'd like something even shorter, or with a summary up front.
I'm not sure if they fit your criteria for length and accessibility, but personally I was first convinced by these posts:
https://www.reddit.com/r/ChatGPT/s/9DrSBpmIdl
That post picks out two outcomes; there are many more. But it makes the key point: once an AI gets smarter than us and we hand it power (by mistake or on purpose, actively or passively), it can end up marginalising us, and we may be unable to switch it off without catastrophe. So the existential hazard comes either from the AI's actions themselves or from the consequences of trying to switch it off.
Everything else is sci-fi grey goo nanobots and large language models engineering a super-ebola. Speculative. X-risk comes from some form of society sleepwalking into a bad situation. That's the simplest pitch.