Will a language model that runs locally on a consumer cellphone beat GPT4 by EOY 2026?
84% chance
The GPT-4 baseline is GPT4-0314.
For the locally run model, we refer to the language model alone, not augmented with search, RAG, or function calling. It needs a minimum throughput of 4 tokens/second.
I am not sure what benchmarks people will use in 2026, but for now let's say the LMSYS Arena. This may change depending on the trend.
Current SOTA: I am not sure Phi-3 (3.8B) can fit on a phone. If not, the current best models are MiniCPM and Gemma 2B.
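The 4 tokens/second criterion is straightforward to check on-device. Below is a minimal sketch of such a measurement, assuming a llama-cpp-python build of whatever quantized model is being tested; the model path and prompt are placeholders, not part of the market description.

# Rough throughput check for the 4 tokens/second resolution criterion.
# Assumes llama-cpp-python with a quantized GGUF model on disk; the
# model path and prompt are placeholders.
import time
from llama_cpp import Llama

llm = Llama(model_path="model.gguf", n_ctx=2048)

start = time.perf_counter()
out = llm("Explain the rules of chess in one paragraph.", max_tokens=256)
elapsed = time.perf_counter() - start

generated = out["usage"]["completion_tokens"]
tps = generated / elapsed
print(f"{generated} tokens in {elapsed:.1f}s -> {tps:.2f} tokens/s")
print("meets 4 tok/s criterion:", tps >= 4.0)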
This question is managed and resolved by Manifold.
Related questions
By January 2026, will a language model with similar performance to GPT-4 be able to run locally on the latest iPhone?
81% chance
By January 2026, will we have a language model with similar performance to GPT-3.5 (i.e. ChatGPT as of Feb-23) that is small enough to run locally on the highest end iPhone available at the time?
93% chance
Will there be an AI language model that strongly surpasses ChatGPT and other OpenAI models before the end of 2025?
44% chance
Will a language model comparable to GPT-4 be trained, with ~1/10th the amount of energy it took to train GPT-4, by 2028?
92% chance
Will we have an open-source model that is equivalent to GPT-4 by end of 2025?
82% chance
Will a single model running on a single consumer GPU (<1.5k 2020 USD) outperform GPT-3 175B on all benchmarks in the original paper by 2025?
86% chance
Will a model as great as GPT-5 be available to the public in 2025?
84% chance
Most popular language model from OpenAI competitor by 2026?
20% chance
Will there be a language model called GPT-5, released by OpenAI, this decade?
95% chance
Will $10,000 worth of AI hardware be able to train a GPT-3 equivalent model in under 1 hour, by EOY 2027?
16% chance