
Will a different machine learning architecture that is much faster or much cheaper (at least 5x) than current SOTA (Transformers), for both inference and training, be released in 2023?
Ṁ6,197 · Resolved NO on Jan 9
Feb 19, 7:42pm: Will a different machine learning architecture that is much faster (at least 5x) than current SOTA (Transformers) be released in 2023? → Will a different machine learning architecture that is much faster or much cheaper (at least 5x) than current SOTA (Transformers), for both inference and training, be released in 2023?
This question is managed and resolved by Manifold.
🏅 Top traders
| # | Name | Total profit |
|---|---|---|
| 1 | | Ṁ80 |
| 2 | | Ṁ70 |
| 3 | | Ṁ39 |
| 4 | | Ṁ30 |
| 5 | | Ṁ27 |
Related questions
Will Transformer-based architectures still be SOTA for language modelling by 2026?
80% chance
By the start of 2026, will I still think that transformers are the main architecture for tasks related to natural language processing?
75% chance
Will there be a reasoning model more powerful than o1-preview, and cheaper and >10x faster than o1-mini, by Nov 12 2025?
84% chance
Will the most capable, public multimodal model at the end of 2027 in my judgement use a transformer-like architecture?
63% chance
Will a transformer-based model be SOTA for video generation by the end of 2025?
82% chance
Will different hardware be used for training and for inference of neural networks? (before 2030)
95% chance
Will humans create a SOTA AI model without Multi-Layer Perceptrons by 2029?
39% chance
If OpenAI makes a transformer-sized advancement in the next 5 years, will they publish an accompanying paper?
45% chance
Will a GPT-4 quality model be trained for under $10,000 by 2030?
78% chance
Will a model costing >$30M be intentionally trained to be more mechanistically interpretable by end of 2027? (see desc)
60% chance