It depends on what you mean by AGI.
Current LLMs are much more general and intelligent than anything we had a few years ago. You no longer need separate systems for translation, text summarization, and image recognition; you have one system you can just tell, in human language, what you want it to do, and that's totally crazy. So based purely on the meaning of the word "general", we kind of have it now.
But I was introduced to the term AGI in the context of AI safety, and modern LLMs fail completely at displaying coherent agent behavior. They are currently unable to follow long-term goals or act independently. That makes them much safer, and definitely not an AGI in that sense. I don't think we'll have a real optimizer-like AGI anytime soon.
Another angle is economic: your AI can have whatever flaws, but if it works better than half of your employees, those people are out of a job. Here it depends on where you set the threshold, but I'd say we'll see a problematic unemployment spike in the next 10 years due to automation.
Some people believe it's already here:
https://manifold.markets/SimoneRomeo/have-we-achieved-agi-already?r=U2ltb25lUm9tZW8