This market will be resolved when a recursively self-improving AI outpowers the combined might of human civilization. At least, I'll try my best to close it.
Dec 7, 6:04pm: How many years until the singularity? → How many years until the singularity? (Since December 2022)
Some quick thoughts.
Does the resolution require a runaway singularity? How does it resolve in a “powerful, but passive AI” scenario where human operators retain high-level control – in particular, where the AI’s ability to remove its safeguards at will is limited, both in motivation (e.g. it cannot modify a read-only ethics module) and in access to modalities (e.g. it cannot escape sandboxing)? In simple terms, if it ends up being essentially a superhuman pet, that would not really be the kind of qualitative, unpredictable, existential shift I associate with the term “singularity”, but more a large yet still quantitative improvement on Google and other “extended mind” tools.
In contrast, I would say the other edge cases – e.g. the “powerful and unshackled, but benevolent” or “powerful and notionally shackled, but dangerously misspecified” scenarios – would qualify as a singularity, because they involve a significant loss of control. And what I consider the prototypical scenario is someone unleashing a fully autonomous and capable system into the world with seriously inadequate safeguards, or none at all.