Will AI xrisk seem to be handled seriously by the end of 2026?
26% chance

Lately it seems that a number of people have started taking big AI risks seriously, from OpenAI's "Governance of superintelligence" to DeepMind's "Model evaluation for extreme risks".

We're not quite there yet by my standards for safety, and even less so by e.g. Eliezer Yudkowsky's. However, I wonder whether this marks a turning point.

Obviously this is a very subjective question, so I am warning you ahead of time that it is going to resolve in opinionated ways. To anchor the discussion, I expect the following to be necessary for a YES resolution:

  • Major leading AI companies openly acknowledge that existential risk is a possibility, not just in a marginal sense (e.g. internal discussion among employees, or rare cases where leaders begrudgingly admit it) but in a central sense (e.g. openly having research teams work on it, or devoting long sections of documents aimed at politicians to it).

  • Something is done to handle unilateral actors, e.g. active progress is made toward an international organization that can prevent unilateral actors from creating unfriendly AI, or somehow all of the plausible creators come to take AI xrisk seriously.

  • Yann LeCun changes his mind and takes AI xrisk seriously, or no longer holds much sway over the issue at Facebook.

  • The lessons of "Worlds Where Iterative Design Fails" are taken seriously by the above systems.

Please ask me more questions in the comments to help cement the resolution criteria. If my opinion on the inherent danger of AI xrisk changes during the resolution period, I will try to resolve based on the level of risk implied by my criteria, not based on my later evaluation of things.

However, if it turns out that there is a qualitatively different but similarly powerful way of handling AI xrisk, and it gets implemented in practice, I will also resolve this question YES.

Can you give any examples of what would be sufficient for a YES resolution? If all your bullet points in the description are satisfied, are we getting close to a YES resolution?
And some more questions about necessary conditions (or conditions that must almost certainly be satisfied for this market to resolve YES):
* Is it necessary that at least one AI safety researcher thinks that AI xrisk is handled seriously?
* Is it necessary that at least 50% of AI safety researchers think that AI xrisk is handled seriously?
* Is it necessary that Eliezer Yudkowsky thinks that AI xrisk is handled seriously? (from the description I infer that this one is "no", correct?)

If all your bullet points in the description are satisfied, are we getting close to a YES resolution?

@FlorisvanDoorn I tried to write the bullet points in such a way that I'd expect the market to resolve YES if they are satisfied; however, I can imagine there being things I haven't thought of yet which should obviously be taken into account in retrospect, or ways for the criteria to be technically satisfied in a less impactful way than I expect.

* Is it necessary that at least one AI safety researcher thinks that AI xrisk is handled seriously?

* Is it necessary that at least 50% of AI safety researchers think that AI xrisk is handled seriously?

If AI safety researchers think that it is not enough, I will try to get them to give me some arguments so I can evaluate the situation myself. If they point out severe gaps in my criteria, the market will resolve NO. If instead they convince me that AI xrisk is worse than I currently think (e.g. that gradient hacking is a real concern, or that recursive self-improvement is a concern even at small scales), then by the "I will try to resolve based on the level of risk implied by my criteria, not based on my later evaluation of things" clause, the market will resolve YES.

* Is it necessary that Eliezer Yudkowsky thinks that AI xrisk is handled seriously? (from the description I infer that this one is "no", correct?)

Your inference is correct.

By your standards, are bio-risks handled seriously? Are nuclear weapons?

@JacobPfau I'm not sure this is a meaningful comparison (the risk profiles seem different, especially for nuclear weapons, where there isn't really a legitimate non-destructive purpose), and I'm not sufficiently familiar with the subject matter or technical details to say.