To what extent is GPT-4 AGI?
AGI • GPT-4
0 - not at all AGI
(0, 0.1] - very slightly AGI
(0.1, 0.3] - slightly AGI
(0.3, 0.6] - somewhat AGI
(0.6, 0.8] - nearing AGI
(0.8, 0.95] - most of the way to AGI
(0.95, 1) - pretty much AGI
[1, 1.1) - it's AGI, but barely
[1.1, 1.3) - it's easily AGI
[1.3, ∞) - it's well beyond basic AGI

Now that there are quite a few responses, I would like to ask the people voting "not at all": what are your arguments?

To me, GPT-4 seems very well informed but otherwise rather stupid. Still, it is very hard not to see that it generalizes and understands at some level.

@Irigi Simply because it cannot learn. If it had even rudimentary capabilities to remember previous chats (which, I understand, is a very non-trivial problem in the context of token limits), then the absolute lowest I could possibly have voted would have been (0, 0.1].

@Pykess So are you making a distinction between the interface/API and what it could be if simply given access to storage, but voting based on the former?

@SoundsNentindo It is definitely not a limitation of the interface; it's a fundamental limitation of current LLMs: context length. GPT-4 Turbo's context is 128 thousand tokens, which is a massive improvement on previous versions but not at all enough for continuous chat memory. Note that to fulfill the requirements for AGI it doesn't need perfect memory of all past conversations, so increasing the context length is not the only way to solve this problem. If it were able to adjust its weights in response to conversations, thereby learning, that would be enough, and it would be a sort of rudimentary way of remembering past conversations.
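To make the limitation concrete, here is a minimal sketch (the token counter and message format are illustrative stand-ins, not OpenAI's actual serving code) of why a fixed context window bounds chat memory: anything that no longer fits is silently dropped and the model never sees it again.

```python
CONTEXT_LIMIT = 128_000  # GPT-4 Turbo's advertised context length, in tokens

def fit_history(messages, count_tokens):
    """Keep only the most recent messages that fit in the context window."""
    kept = []
    budget = CONTEXT_LIMIT
    for msg in reversed(messages):  # walk from newest to oldest
        cost = count_tokens(msg)
        if cost > budget:
            break  # everything older is dropped, i.e. "forgotten"
        kept.append(msg)
        budget -= cost
    return list(reversed(kept))

# Crude whitespace "tokenizer" purely for illustration; a real system
# would use the model's own tokenizer (e.g. tiktoken).
messages = [f"message {i}: " + "word " * 99 for i in range(5_000)]
recent = fit_history(messages, count_tokens=lambda m: len(m.split()))
print(f"{len(recent)} of {len(messages)} messages still visible to the model")
```

However long the conversation grows, only roughly the last 128k tokens are ever visible to the model, which is the "continuous chat memory" problem described above.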

This poll is a really good litmus test for Manifold's average proficiency in machine learning.

@Pykess Tbf, it probably also varies a lot by people's definitions of AGI. Personally I set the bar for that quite low (able to do any human intellectual task as well as 50% of the population), while I set the bar for superintelligence much higher.

@TheAllMemeingEye I agree. I did my thesis on machine learning, and honestly I would say there are tiny bits of AGI in GPT-4. It's not simply an autocomplete machine. Some published papers indicated that the network itself developed certain aspects that mimic some processes for higher brain functions and reasoning. I get that a lot of people still see these things as brainless machines, but in all honesty we don't know how our brain works either. Maybe we are also very simple machines.

Regardless, if you showed ChatGPT to 1960s folks, they would totally call it AGI. This shows that our baseline for AGI has shifted throughout the years, and it's likely that it will keep shifting until we achieve true ASI.

@Pykess You indicate that there is a clear consensus among machine learning scientists on the answer, and that whoever votes otherwise simply does not know enough. If so, what is the consensus? (I am definitely not an expert in the field.)

@Irigi I suspect that even when we achieve AGI, the "machine learning scientist" consensus will be that we haven't.

@Snarflak lol I like this one:

>"AI is whatever hasn't been done yet."
โ€”Larry Tesler

so true

@Snarflak Btw, I think even if the companies behind those AIs are convinced that it is AGI, they might decide that it's in their best interest to conceal it and call it generative AI or weak AGI, especially given all the fearmongering going on around AI takeover.

@TheAllMemeingEye In my worldview, some animals are smarter than the median human. They can't speak, but they can learn from mistakes and be inventive.

So the bar at 50% of humanity is a really low bar.

@KongoLandwalker I like a funny quote I read recently: it's impossible to design a bear-proof trash bin, because there is significant overlap between the smartest bears and the dumbest humans.

@KongoLandwalker

> So the bar at 50% of humanity is a really low bar.

If such an AI could replace 50% of jobs, I would say that is an immense impact.

@Irigi Plot twist: @Pykess actually thinks that the consensus of ML scientists is that GPT is 60% AGI, and makes fun of all the Manifold people who have voted lower.

> If such an AI could replace 50% of jobs, I would say that is an immense impact.

To replace most jobs, it needs not more intelligence, but hands.

@Irigi The literature on this topic is mainly in the 0 or (0, 0.1] category, with only the most extreme views in the (0.1, 0.3] range. There are some reports/internal white papers (note that these are not publications from external groups) that tread into the (0.3, 0.6] regime. At the risk of overgeneralizing, the trend seems to be that the farther people are from CS/ML research, and/or the closer they are to business/entrepreneurship, the higher they rate such models.

This poll has indeed shown that, once again, the wisdom of the crowd is quite powerful (of course, that's the whole point of Manifold), since the poll aligns very closely with the opinions that the wider research community shares.

Edit: when I wrote my original "litmus test" comment, there were only about 5 votes in the poll. I did not mean to insinuate that the poll participants so far were wrong.

@TheAllMemeingEye I agree that it would depend on one's definition of AGI. But there is an established and accepted definition of AGI, and being familiar with this definition (i.e., ML familiarity/proficiency/experience) is important. If I made a poll that asked "Are apples oranges?", it would matter a lot if someone's own definition of "apple" were "any edible part of a plant," as in Old French. This is of course silly, but it highlights that differing definitions lead not just to a different evaluation, but to considering a different question altogether.

@Pykess I really don't think so; it's just a meaningless definition test. Everyone basically knows what GPT-4's capabilities are, so it's just a matter of whether you consider that AGI or not.

@TomGoldthwait I don't mean that any specific test for AGI is established, but rather that for an algorithm to be AGI, it needs to be capable of continued learning. GPT is static and doesn't remember past conversations at the moment.
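To illustrate the distinction, here is a minimal PyTorch sketch (a toy linear model standing in for an LLM; the data and loss are placeholders, not anyone's actual training setup) of frozen-weight inference versus the rudimentary weight-updating discussed above:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Linear(16, 16)  # toy stand-in for an LLM
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)

def deployed_inference(x):
    # How GPT is served today: weights are frozen, so nothing persists
    # between calls except what is re-sent in the prompt/context.
    with torch.no_grad():
        return model(x)

def learn_from_interaction(x, target):
    # The "adjust its weights in response to conversations" idea: a small
    # gradient step after each interaction, so the experience is retained
    # in the weights instead of having to fit inside the context window.
    loss = nn.functional.mse_loss(model(x), target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

x, target = torch.randn(4, 16), torch.randn(4, 16)
before = nn.functional.mse_loss(deployed_inference(x), target).item()
for _ in range(100):
    learn_from_interaction(x, target)
after = nn.functional.mse_loss(deployed_inference(x), target).item()
print(f"loss before: {before:.3f}, after 100 updates: {after:.3f}")
```

The first function is all a static model can do; the second is what "capable of continued learning" would require.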