Question under consideration: When will the first general AI system be devised, tested, and publicly announced?
At the time of writing, the community prediction is that this will occur between 2026 and 2047. But I wonder: good questions are hard to operationalize, and this one might be badly operationalized. E.g., a rigid model that is unable to create truly novel capabilities for itself might still count for the purposes of the Metaculus question.
Resolution criteria:
This question resolves once the Metaculus question resolves.
If the AI that caused the Metaculus question to resolve is general enough that I feel comfortable calling it AGI, this question resolves NO. If I feel it is absurd to call it AGI, this question resolves YES. If I am uncertain or ambivalent, I may resolve this question PROB or N/A; in that case, I will probably resolve to the average PROB from a poll of a couple of AI safety researchers I like.
A rough anchor point for whether it is general AI is whether the system has any fundamental limitations that prevent it from engaging in other tasks (like running a company, developing new medications, going to the Moon, fighting a war), or whether it is just a question of scale, compute, and raw resources before it can do so.