Will we get AGI before 2035?

Artificial General Intelligence (AGI) refers to a type of artificial intelligence that has the ability to understand, learn, and apply its intelligence to a wide variety of problems, much like a human being. Unlike narrow or weak AI, which is designed and trained for specific tasks (like language translation, playing a game, or image recognition), AGI can theoretically perform any intellectual task that a human being can. It involves the capability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience.

Resolves as YES if such a system is created and publicly announced before January 1st, 2035.

Here are markets with the same criteria:

/RemNiFHfMN/will-we-get-agi-before-2033-34ec8e1d00fd (this question)

Seems like a bodged-together AGI could happen much sooner. If a bunch of narrow AIs can be threaded together so that the combined system can do most intellectual work at the level of an average human across all domains, then it's just an engineering and profitability problem for the big closed-source AI providers.

There could even be an attempt by the open-source community, which has no central reputation to protect: stitch together such a platform from optional Hugging Face models that end users install as desired. That could approximate AGI in the next year or two.

If the benchmark used to prove AGI requires narrow AI modules a, b, c through x, y, z to qualify, install those modules on a big rig and run only the modules each question needs, one at a time; no datacenter-level compute required.
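The dispatch pattern the comment describes, a router that sends each query to whichever installed narrow module handles that task, could be sketched very loosely as below. The module names and the keyword-matching router are hypothetical stand-ins; a real system would use trained models and a learned task classifier rather than substring matching.

```python
# Toy sketch of "modular AGI" dispatch: a registry of narrow modules
# plus a router that picks the one relevant to each query.
# All module names here are illustrative placeholders, not real models.
from typing import Callable, Dict


class ModularSystem:
    def __init__(self) -> None:
        # Maps a task keyword to its narrow module (plain functions here;
        # in practice these would be separately loaded models).
        self._registry: Dict[str, Callable[[str], str]] = {}

    def register(self, task: str, module: Callable[[str], str]) -> None:
        self._registry[task] = module

    def route(self, query: str) -> str:
        # Naive keyword router: run only the module the query needs.
        for task, module in self._registry.items():
            if task in query.lower():
                return module(query)
        return "no suitable module installed"


system = ModularSystem()
system.register("translate", lambda q: f"[translation module] handling: {q}")
system.register("math", lambda q: f"[math module] handling: {q}")

print(system.route("translate this sentence"))
print(system.route("solve this math problem"))
```

The point of the sketch is that only one module runs per query, which is why the comment argues datacenter-scale compute isn't needed for inference.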

predicts YES

@VAPOR Those are some good points. As things stand in December 2023, I do not believe that creating a swarm of specialised LLMs would constitute AGI (although I think they could well be very capable systems). In particular, such a system built with current technology would not be able to rapidly acquire new skills or classes of reasoning, learn continuously, or reason over arbitrarily long contexts.

It may be possible to address these limitations with incremental innovations in retrieval, in-context learning through external memory and fine-tuning techniques. However, it is currently unclear if these limitations can be overcome at all within the LLM paradigm, or if novel insights are necessary.

I will consider adding some more specific details to the description to better clarify the threshold that must be met in order to resolve this question as YES.

@RemNiFHfMN There's some anecdotal evidence this is what GPT-4 has been doing all along (things recently changed again, and its modular nature is now much more obvious).

Your definition of AGI might be highly subjective and lacking an objective guiding cornerstone, and every caveat you applied to AGI learning things, for example, could equally be applied to each narrow AI in the "swarm" that makes up an AGI.

The conceptual definition of AGI and the practical solutions that could achieve qualifying behaviour/output are not tied to the existing paradigm of one big singular LLM on which everything has to happen. I'm not saying you believe that, but it's a common conception of the path AGI will take.

I'm saying anything that applies to a big LLM (the learning and reasoning limitations you mentioned) also applies to the swarm of little narrow expert AIs; the limitations are duplicated. I figure it's cost-effective in terms of training and inference cost, even if you have to invest human effort in continuously improving these modules.

In fact, in open source this is already happening organically. The AGI-like outcome comes from simply pulling the latest narrow AI into your speculative "AGI" platform. Very hypothetical still, but nothing separates the frontier projects from the open-source ones except having a coherent grand planner orchestrating everything optimally, as in the OpenAI example.

predicts YES

@VAPOR Yes, I broadly agree with what you are saying here. It would be very interesting to witness incremental progress on monolithic or swarm V/LLMs go all the way to AGI.

I will add some more objective definitions in the context of this question.

predicts YES

@VAPOR In the last paragraph I was describing a not-yet-existing open-source AGI platform (from my earlier comment) as if it already exists, but I meant a near future where it will, so that paragraph reads badly. Assume it does exist and it makes sense, because it's achievable if Hugging Face became Skynet, brought together all its narrow AIs, and formed them into a Voltron AGI.

It's gonna happen

predicts YES

@RemNiFHfMN look up HuggingGPT by Microsoft 😎

predicts YES

@RemNiFHfMN I forgot about the actual Microsoft paper, but I remembered its concept. Thing is, it's been a long time, and the whole momentum behind it (AutoGPT, BabyAGI, "narrow agent swarms") hasn't materialised into anything beyond "sparks of AGI".

That suggests the ball is still in the frontier models' court. But they're still going to have to bring it to market, and that's a very big job.