I see a lot of questions about AGI that seem to hinge heavily on the definition of AGI. They tend to be long-term questions, so maybe it doesn't matter that much, but I think there is a huge difference between "slightly better ChatGPT" and "rapidly self-improving superintelligence".
For ages I thought "a machine that can do anything a human can do" was the definition. So I didn't pay any attention to the idea: it seems self-evident that humans don't know the limits of what a human can do, so I figured the whole thing was promotional and overblown by the same people who think The Facebook Algorithm Is Magic.
That said, I'd like to hear a concise, more rigorous definition. There may be something to the idea "it's a Godzilla Clippy that talks and will destroy humanity by taking all our resources", but I'm still missing a lot by just going with that impression.
ETA: I also didn't know until last week that ChatGPT doesn't learn from its interactions. It just lives in 2022 forever.
ETA2: The general public, even if limited to reasonably smart, engaged people, is poorly served by the huge gulf between "it's a Godzilla Clippy" and "only a few superior beings can possibly understand it".
@ClubmasterTransparent Why would someone need to know everything that humans can do before defining AGI as "a machine that can do anything a human can do"?
I think most people aren't considering just a slightly better ChatGPT. Though there is a range in what is considered, I think "human level but faster" is the typical reference point.
Sometimes people are considering it within a far-future context or whatever, but usually(?) that's obvious from context.
So while there are differences in how exactly you define human level (such as whether to include operating a robot efficiently), I think the definitions are less different than people make them out to be.
@Aleph Might help if we ditched the term "human level" completely. It's generating more heat than light.