The winner will be chosen by a poll at the end of the year.
I don't think AGI is a useful term in general. Most definitions of it capture neither capabilities nor risks.
AI systems can "steal jobs" without being able to fully replace a worker, for example. It's also highly questionable what an "average human" is actually capable of.
You can have a system that meets none of the popular AGI definitions but is still very capable and concerning, both in terms of economic and existential risk.
Subjectively, to me AGI is a system that can understand the consequences of its actions in the real world, learn, and act as an agent. But that's too vague to be quantifiable.
Here's a neat post I read today. It seems that defining AGI to really mean ASI isn't in line with the original meaning of the term. Having separate standards for artificial general intelligence and natural general intelligence also doesn't make much sense to me.