This market will resolve when a recursively self-improving AI surpasses the combined power of human civilization. At least, I'll try my best to resolve it then.
Dec 7, 6:04pm: How many years until the singularity? → How many years until the singularity? (Since December 2022)
Some quick thoughts.
Does the resolution require a runaway singularity? How does it resolve in a “powerful, but passive AI” scenario where human operators retain high-level control? In particular, one where the AI is prevented from removing its own safeguards at will – both in its motivations (e.g. modifying a read-only ethics module) and in its access to the outside world (e.g. escaping sandboxing). In simple terms, if it ends up being essentially a superhuman pet, that would not be the kind of qualitative, unpredictable, existential shift I associate with the term “singularity”, but rather a large yet still quantitative improvement over Google and other “extended mind” tools.
In contrast, I would say the other edge cases – e.g. the “powerful and unshackled, but benevolent” and “powerful and notionally shackled, but dangerously misspecified” scenarios – would qualify as a singularity, because they involve a significant loss of control. And the prototypical scenario, as I see it, is someone simply unleashing a fully autonomous and capable system into the world with seriously inadequate safeguards, or none at all.