Are Transformers “the last technology to do all of ML” (before AGI)?
Resolves 2030
- Yes: transformers scale the best (11%)
- No, but transformers are still key (the breakthroughs build upon transformers) (46%)
- No, but the new technology is still a “large foundation model” (non-transformer, e.g. Mamba, diffusion) (7%)
- No, a paradigm shift is needed (e.g. JEPA) (35%)

In his “exit interview” from OpenAI, Jerry Tworek says he believes transformers are “pretty clearly not” the last technology to do all of ML (https://youtu.be/VaCq4u5c78U&t=33m13s; more context: https://youtu.be/VaCq4u5c78U&t=32m39s).

In contrast, when asked about the limitations of LLMs, Demis Hassabis said that while he believes one or two more breakthroughs (e.g. continual learning, better memory / context windows, long-term vision) are likely needed to reach AGI, rather than pure scaling of existing technology, he disagrees that LLMs are “a dead end,” saying that “large foundation models” are a key component of the final AGI systems.

clip: https://m.youtube.com/watch?v=bgBfobN2A7A&t=1m39s

AI-generated resolution criteria (low weight):

Resolution criteria

This market resolves YES if, by the resolution date, there is credible evidence that transformers are "the last technology to do all of ML" before AGI is achieved. This would require demonstrating that no fundamentally new architectural paradigm or technological breakthrough was necessary beyond transformer-based systems to reach AGI.

The market resolves NO if credible evidence shows that additional architectural innovations, algorithmic breakthroughs, or technological paradigms beyond transformers were essential to achieving AGI.

Resolution will be determined by statements from leading AI researchers, published research, and documented technical developments at major AI labs (OpenAI, DeepMind, Anthropic, etc.) regarding what technologies were actually required to achieve AGI.

Background

The transformer architecture was introduced in 2017 in the paper "Attention Is All You Need" by researchers at Google. The transformer blueprint has become the foundation of nearly every modern AI system.
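As background on what “transformer” refers to here: the core operation of the architecture is scaled dot-product attention. Below is a minimal, illustrative NumPy sketch (a toy example of my own, not taken from the market or the linked interviews); real transformers add multi-head projections, residual connections, layer normalization, and feed-forward blocks on top of this.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Core transformer operation from "Attention Is All You Need".

    Q, K, V: (seq_len, d_k) arrays of queries, keys, and values.
    Returns a (seq_len, d_k) array in which each output row is a
    weighted average of the value rows.
    """
    d_k = Q.shape[-1]
    # Similarity of every query with every key, scaled so the
    # softmax stays well-behaved as d_k grows.
    scores = Q @ K.T / np.sqrt(d_k)
    # Row-wise softmax turns scores into attention weights.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

# Toy usage: 4 tokens with 8-dimensional embeddings.
# Passing the same array as Q, K, and V gives self-attention.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(x, x, x)
print(out.shape)  # (4, 8)
```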

The question hinges on a fundamental disagreement among AI researchers about whether transformers alone are sufficient for AGI. Jerry Tworek was a key player in building GPT-4 and ChatGPT, ran the "Reasoning Models" team, and was part of the core group behind the o1 and o3 models. In contrast, Demis Hassabis stated there was a 50% chance AGI might be achieved within the decade, though not through models built exactly like today's AI systems. He elaborated that "maybe we need one or two more breakthroughs before we'll get to AGI," identifying gaps that include the ability to learn continuously, better long-term memory, and improved reasoning and planning capabilities.

Considerations

The debate reflects a genuine technical disagreement about AGI's prerequisites. Hassabis argues that simply making models larger (scaling) is no longer enough and that a qualitative leap is needed to supply key missing pieces on the path to AGI. Yann LeCun has argued that LLMs will never achieve humanlike intelligence and that a completely different approach is needed. However, Hassabis also noted that "we're seeing incredible gains with the existing techniques" while "inventing new things all the time," and that reaching AGI "may require one or two more new breakthroughs." This suggests the disagreement is not about whether transformers are useful, but about whether they alone are sufficient without complementary innovations.
