LLMs by EOY 2025: Will Retentive Learning Surpass Transformers? (Subsidised 400 M$)
14% chance

Will Retentive Learning be widely considered better than transformer-based architectures for frontier LLMs by the end of 2025?



Original Paper - https://arxiv.org/abs/2307.08621



Based on this video - https://www.youtube.com/watch?v=ec56a8wmfRk
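For context, the paper's core change is replacing softmax attention with a retention mechanism that has an equivalent recurrent form, so inference carries a fixed-size state instead of a growing KV cache. Below is a minimal, single-head sketch of that recurrent form, assuming illustrative variable names and omitting the paper's multi-head structure, group normalization, and xPos-style rotations:

```python
import numpy as np

def recurrent_retention(Q, K, V, gamma=0.9):
    """Single-head recurrent retention sketch (RetNet, arXiv:2307.08621).

    Q, K, V: (seq_len, d) arrays. Instead of a softmax over all past tokens,
    a d x d state S is decayed by gamma and updated once per step, giving
    O(1) per-token inference cost.
    """
    seq_len, d = Q.shape
    S = np.zeros((d, d))              # recurrent retention state
    outputs = np.zeros((seq_len, d))
    for n in range(seq_len):
        # S_n = gamma * S_{n-1} + K_n^T V_n
        S = gamma * S + np.outer(K[n], V[n])
        # o_n = Q_n S_n
        outputs[n] = Q[n] @ S
    return outputs

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    Q, K, V = (rng.standard_normal((8, 4)) for _ in range(3))
    print(recurrent_retention(Q, K, V).shape)  # (8, 4)
```

The constant-size state is the main efficiency claim relative to Transformers, whose per-token cost and memory grow with context length; whether that translates into better frontier-scale LLMs is what this market asks.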


Resolves as "Yes" if:
By the end of the year 2025, Retentive Learning architectures are generally considered superior to Transformer-based architectures for Large Language Models (LLMs), as evidenced by peer-reviewed academic publications, benchmark performance metrics, or widespread industry adoption.



Resolves as "No" if:
By the end of the year 2025, Retentive Learning architectures are generally considered inferior to Transformer-based architectures for Large Language Models (LLMs), as evidenced by peer-reviewed academic publications, benchmark performance metrics, or limited industry adoption.


'If an entirely different architecture emerges and surpasses both Retentive Learning and Transformer-based models, this market will also resolve as "Yes."'

The title of this question is misleading with respect to this. Maybe the question should be “Will transformers no longer be the SOTA architecture by EOY 2025?”

predicts NO

@AdamK Oof thanks for flagging this

@AdamK Thanks for flagging, I have removed that sentence now.

I was intending to get at the idea of an architecture that displaces both Retentive and Transformer-based models, making them both superfluous; nonetheless, I have strong reason to suspect Retentive is superior to Transformer-based models.
