Will there be a fire alarm for AGI by the end of 2027?

For the purposes of this market, a "fire alarm" will be any event that substantially increases the AI capabilities research community's concern about AI. It would cause a large fraction of prominent AI risk skeptics to either change their mind or leave their current positions as the field shifts out from under them.

(Getting widespread mainstream publicity and increasing the general population's concern about AI is not necessary, though I think it's fairly likely to occur for anything that would satisfy the above criterion.)

A gradual realization over the course of several years doesn't count; it must be a small number of prominent events.

For example, if something causes Meta to change their current stance of "I don't understand the risks, therefore they don't exist", that would probably count. (It couldn't be something that only affects Meta, like a single more safety-conscious person being put in charge.)

This resolution criterion is obviously very vague, and I'm open to suggestions for more concrete operationalizations.

Isaac King (predicting NO at 40%):

How do people feel about Bing Chat? It's certainly led to a lot of public talk about AI doom. It doesn't seem to have shifted research directions all that much, though.
