
A major ML paper demonstrates a symbolic-enhanced transformer successor outperforming standard transformers by March 2025
Ṁ1k · Ṁ4.1k · resolved Apr 2
Resolved NO
Will a published machine learning paper demonstrate a new architecture that combines transformers with symbolic methods (category theory, programming language theory, or formal logic) and achieve superior performance on standard benchmarks compared to traditional transformer-only architectures?
Resolution criteria: the paper must be published on arXiv (and, strongly preferably, at a major ML conference or journal) and show statistically significant improvements over baseline transformers on multiple standard tasks.
This question is managed and resolved by Manifold.
🏅 Top traders
| # | Trader | Total profit |
|---|---|---|
| 1 | | Ṁ458 |
| 2 | | Ṁ214 |
| 3 | | Ṁ124 |
| 4 | | Ṁ116 |
| 5 | | Ṁ80 |
Related questions
Will an AI achieve >85% performance on the FrontierMath benchmark before 2028?
61% chance
On January 1, 2027, a Transformer-like model will continue to hold the state-of-the-art position in most benchmarks
89% chance
Will superposition in transformers be mostly solved by 2026?
47% chance
If OpenAI makes a transformer sized advancement in the next 5 years, will they publish an accompanying paper?
45% chance
Will transformer architectures lose their dominant position in deep learning before 2028?
16% chance
Will the most capable, public multimodal model at the end of 2027 in my judgement use a transformer-like architecture?
63% chance
Are Transformers “the last technology to do all of ML” (before AGI)?
Will Transformer-Based LLMs Make Up ≥75% of Parameters in the Top General AI by 2030?
50% chance
Will there be a significant advancement in frontier AI model architecture by end of year 2026?
24% chance
Will an AI achieve >85% performance on the FrontierMath benchmark before 2027?
35% chance