Will there be a gpt-4 quality LLM with distributed inference by the end of 2024?
Resolved NO (Jan 2)

  • The model has an Elo greater than 1190 on ChatbotArena (or, if ChatbotArena is no longer available/updating, achieves GPT-4 (03.14)-equivalent or greater performance on both MMLU and MT-Bench)

  • When running inference in a geographically distributed fashion (the computational hardware is not colocated, and is networked over typical consumer equipment; see the illustrative sketch below)

  • on heterogeneous hardware (the computational hardware is varied in type, e.g. different GPU models)

  • without the act of distributed inference causing the model to require two orders of magnitude (OOM) more energy usage (e.g. if doing so is incredibly lossy and inefficient, it does not count; the burden of proof lies on anyone claiming this clause should be activated)

Note: if (or when) an edge case is presented, its applicability to this question will be evaluated according to my and Robert's understanding of the spirit of the question.
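The second and third criteria describe the pattern Petals-style systems use: pipeline-parallel inference, where each non-colocated machine hosts a slice of the model's layers and forwards activations to the next hop over an ordinary internet link. Below is a minimal, illustrative Python sketch of that pattern only; it is not part of the resolution criteria, every name and number in it (HIDDEN, LAYERS_PER_WORKER, WorkerShard, the joules-per-token figures) is hypothetical, and the network hops are simulated with in-process calls.

```python
# Toy sketch of pipeline-parallel inference across non-colocated, heterogeneous
# workers. All names and numbers are hypothetical; a real system would ship the
# activations between machines over TCP/QUIC instead of calling methods in-process.
import numpy as np

HIDDEN = 64                      # toy hidden size
LAYERS_PER_WORKER = [4, 2, 6]    # heterogeneous: each device hosts a different slice size


class WorkerShard:
    """Holds a contiguous slice of layers; in a real system this runs on a remote GPU."""

    def __init__(self, n_layers: int, rng: np.random.Generator):
        # Toy "transformer layers": random linear maps followed by a nonlinearity.
        self.weights = [rng.standard_normal((HIDDEN, HIDDEN)) / np.sqrt(HIDDEN)
                        for _ in range(n_layers)]

    def forward(self, activations: np.ndarray) -> np.ndarray:
        # Input would arrive over the network; output would be shipped to the next worker.
        for w in self.weights:
            activations = np.tanh(activations @ w)
        return activations


def distributed_forward(shards: list[WorkerShard], x: np.ndarray) -> np.ndarray:
    """Pipeline the activations through each remote shard in order."""
    for shard in shards:
        x = shard.forward(x)     # stand-in for a cross-internet RPC round trip
    return x


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    shards = [WorkerShard(n, rng) for n in LAYERS_PER_WORKER]
    tokens = rng.standard_normal((1, HIDDEN))    # one toy token embedding
    out = distributed_forward(shards, tokens)
    print("output shape:", out.shape)

    # Energy clause (last criterion), with purely hypothetical numbers: the clause
    # only disqualifies a system if distributed inference costs two or more orders
    # of magnitude (100x) more energy per token than a colocated baseline.
    colocated_j_per_token = 0.5      # assumed colocated baseline, joules/token
    distributed_j_per_token = 12.0   # assumed distributed measurement, joules/token
    overhead = distributed_j_per_token / colocated_j_per_token
    print(f"energy overhead: {overhead:.0f}x -> clause triggered: {overhead >= 100}")
```

With these made-up figures the overhead is 24x, well under the 100x threshold, so the energy clause would not be triggered.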
