Will the first AGI be a large language model?
2040 · 25% chance

AGI strictly refers to strong AI that matches or surpasses humans at every task. Any architecture that primarily relies on next-token prediction, such as GPT-4 or Claude 3, would count as an LLM.


I'll be seeing myself out of this market, with a warning to anyone hoping to answer anything beyond "what will Manifold voters think the first AGI is, to a first approximation?": don't expect much discussion or calibration.

I am voting a big YES for a reason I think current NO voters are glossing over: this market can only resolve based on subjective measures, EVEN IF we agree on the criteria by which we, as a society, judge whether or not a thing counts as AGI.

No matter HOW GOOD an AI gets at doing tasks with tangible consequences, its ability to do so will be categorized as SKILL rather than intelligence UNLESS it can either communicate some reasoning behind (intelligently) altering its task OR follow complex instructions as input for the task. Neither of these has a clear path without an LLM component, which would then be the primary element by which its intelligence is assessed.

Put another way: action without explanation will be rationalized as "skill," while complex communication WITHOUT action will be rationalized as an intelligence lacking the proper outputs and heuristics to translate into real-world "skills."

So EVEN IF someone who votes YES turns out to be technically correct, we could never know it (absent a novel method of assessing intelligence without language).

bought Ṁ30 NO

Suppose AGI is achieved through a combination of LLMs, or an LLM agent, but no individual LLM alone would count as AGI. Also suppose that the scaffolding surrounding the model accounts for >50% of the reason it's AGI. Would this scenario resolve YES?

bought Ṁ100 NO

Transformers that are multimodal, including physical embodiment and spatial awareness? Possibly.

LLMs? No way.

@ConnorDolan But then how do you think they will communicate their intelligence? How will they take complex inputs to accomplish complex tasks?

Note that it's not that I think you are wrong per se, but I think you may be missing the key gatekeeper for how we would collectively recognize something as intelligent. We will intentionally program systems to stay limited to their niche if they can neither receive natural-language prompts nor output complex reasoning in recognizable language (therefore likely also natural language) for any autonomous decisions they have made.

I think your point is perfectly situated for the inevitable debate: is it the multimodal embodiment or the LLM that is "primary" in the system we are otherwise agreeing to label AGI? Did the multimodal "senses" inform the language more than the language informed those senses?

Because we are likely to start with LLM-style input as the basis for complex inputs to complex AI outputs, it is far more likely (gatekeeping via our biases) that we will only recognize AGI through a (subjectively) necessary condition: its ability to sufficiently communicate the novelty it is responding to, or the novelty it decided to add autonomously.

If it creates something amazing but cannot describe why it is amazing, then we are unlikely to agree that this is the product of AGI rather than of stochastic selection.

@alieninmybeverage Maybe this is a good time for the market maker to clarify, but I interpreted it to mean that the first AGI must be exclusively an LLM.

I think it's very possible it will be a transformer that interprets semantic meaning across different types of inputs, including natural language, but having a single module (or a subset of its abilities) be equivalent to an LLM doesn't make it fair to call the whole system an LLM.

Edit: Never mind, they do clarify in the comments and description (I should read; I'm not invested in this market). Yeah, I don't think an LLM is going to do the bulk of the heavy lifting in an AGI. Visual and spatial embodiment are much more data-rich.

What if each computing step involves predicting a next token, but the system has a complex chain of thought, memory, and other components, and its next-token-prediction capacity on a text corpus is similar to that of current models? Would you still consider it an LLM?

@EmanuelR Sounds to me like you're describing GPT-4, which is definitely an LLM. Adding many steps doesn't change the fact that it's still doing next-most-likely-token prediction.
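To illustrate the point being made here: however elaborate the chain of thought, memory, or other scaffolding, the core operation remains one-token-at-a-time prediction. A minimal toy sketch (not any real model; the lookup-table "predictor" and token names are invented for illustration):

```python
def next_token(context):
    # Stand-in for an LLM's forward pass: return the "most likely"
    # next token given the context. Here, a trivial hand-written rule.
    table = {"think": "step", "step": "by", "by": "step", "answer": "42"}
    return table.get(context[-1], "answer")

def generate(prompt, n_steps):
    """A long 'chain of thought' is just many repetitions of one call."""
    tokens = list(prompt)
    for _ in range(n_steps):
        tokens.append(next_token(tokens))  # still next-token prediction
    return tokens

print(generate(["think"], 5))
# → ['think', 'step', 'by', 'step', 'by', 'step']
```

Wrapping `generate` in more loops, memory stores, or planning modules changes what surrounds the predictor, not what the predictor does each step, which is the crux of the "is the scaffolding or the LLM primary?" debate above.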

Where's the line between "is a" and "has a" when you say "primarily"?

@JamesBakerc884 Any model whose intelligence is mainly evaluated by how well it does next-token prediction counts as an LLM.

@No_ones_ghost ...Does saying "mainly" instead of "primarily" make the edge cases any clearer?

Mainly or primarily both mean more than 50%, if you want the strictest possible definition.

@No_ones_ghost Okay, and whose judgement of the percentage of intelligence attributable to the various parts of the architecture will this market rely on? Yours, some authority's, or will you resolve N/A for some range of ensemble architectures?

@JamesBakerc884 I use the duck test

@No_ones_ghost Are you going to participate in the market?
