*Note: The above risk question is not limited to x-risk; it also considers non-x-risks (which you might call y-risks).
**"Reeled in" means that a scenario unfolds but does not reach x-risk, owing to various dynamics and/or buffers against total collapse (e.g., machines and AI fail to develop sufficient planning or survival capabilities to navigate the vast non-linear dynamics of a non-superintelligence scenario, and humans in crisis tend to find novel solutions; that is just one possibility).
My question is predicated on the possibilities that true AGI and superintelligence may not be within reach this century, and that specialisation and excessive trust in systems can enable cascading scenarios that nonetheless remain fundamentally, or at least very likely, buffered from wiping out humanity.