For the purposes of this market, a "fire alarm" is any event that substantially increases the AI capabilities research community's concern about AI risk. It would cause a large fraction of prominent AI risk skeptics either to change their minds or to leave their current positions as the field shifts out from under them.
(Getting widespread mainstream publicity and increasing the general population's concern about AI is not necessary, though I think it's fairly likely to occur for anything that would satisfy the above criterion.)
A gradual realization over the course of several years doesn't count; the shift must stem from one or a small number of prominent events.
For example, if something caused Meta to change its current stance of "I don't understand the risks, therefore they don't exist", that would probably count. (It couldn't be something that only affects Meta, like a single more safety-conscious person being put in charge there.)
This resolution criterion is obviously quite vague, and I'm open to suggestions for more concrete operationalizations.
Scott Alexander seems to think the fire alarm has happened. I'm not convinced.
https://astralcodexten.substack.com/p/links-for-april-2023 (#20)