
Resolution Criteria
Resolves to the majority result of a YES/NO poll of Manifold users at the end of 2025 for the question, "Is slowing down AGI good for AI safety?"
Explanation
Sam Altman and others at OpenAI argue that, in a two-by-two matrix of AGI arriving sooner vs. later and takeoff happening slowly vs. quickly, the safest quadrant is soon and slow. Others, such as Katja Grace, argue that we should try to slow down AI, or at least seriously consider doing so. For this question, I take slowing down to mean things like reducing financial investment in AGI-oriented research, taking more time to release AGI-oriented research, and not taking jobs whose primary purpose is to increase AI capabilities.
There are many arguments both for and against slowing down AI, and I may add some of them to this market description over time, attempting to do so evenly for both sides.