Will AI xrisk seem to be handled seriously by the end of 2026?
closes 2026
28% chance

Lately it seems like a lot of people have been taking big AI risks seriously, from OpenAI's "Governance of superintelligence" to DeepMind's "Model evaluation for extreme risks."

We're not quite there yet by my standards for safety, and even further off by e.g. Eliezer Yudkowsky's. Still, I wonder if this marks a turning point.

Obviously this is a very subjective question, so I am warning you ahead of time that it is going to resolve in opinionated ways. To anchor the discussion, I expect the following to be necessary for a YES resolution:

  • Major leading AI companies openly acknowledge that existential risk is a possibility, not just in a marginal sense (e.g. internal discussion by employees, rare cases where leaders begrudgingly admit it) but in a central sense (e.g. openly having research teams working on it, devoting long sections of documents aimed at politicians to it).

  • Something is done to handle unilateral actors: e.g. active progress is made toward an international organization that can prevent unilateral actors from creating unfriendly AI, or somehow all of the plausible creators come to take AI xrisk seriously.

  • Yann LeCun changes his mind and takes AI xrisk seriously, or no longer holds much sway over it at Facebook.

  • The lessons of Worlds Where Iterative Design Fails are taken seriously by the above systems.

Please ask me more questions in the comments to help cement the resolution criteria. If my opinion on the inherent danger of AI xrisk changes during the resolution period, I will try to resolve based on the level of risk implied by my criteria, not based on my later evaluation of things.

However, if it turns out that there is a similarly powerful but qualitatively different way of handling AI xrisk, and it gets implemented in practice, I will also resolve this question to YES.

Comments

Jacob Pfau

By your standards are bio-risks handled seriously? Are nuclear weapons?

tailcalled

@JacobPfau I'm not sure this is a meaningful comparison (the risk profiles seem different, especially for nuclear weapons, where there isn't really a legitimate non-destructive purpose), and I'm not sufficiently familiar with the subject matter or technical details to say.
