How will people run DeepSeek Coder v2 236B locally by 2025?
Lobotomy levels of quantization (e.g. Q2_K): 50%
Unified memory (e.g. M3 Ultra, Mac Studios): 50%
Non-GPU main memory (e.g. AMD EPYC with 512GB DDR5): 50%
Gaming GPUs in one motherboard (e.g. 4090s): 50%
Tall clustering (e.g. Mac Studios over Thunderbolt): 50%
Wide clustering (e.g. Petals): 50%
HBM FPGA/ASIC dark horse (e.g. AMD rains Versal chips like manna): 50%
DeepSeek Coder v2 is arguably a frontier model from a stellar Chinese team. It has cutting-edge efficiency tricks, improved reasoning thanks to extensive code pretraining, and what looks like an excellent math corpus.
It's also an absolute chonker despite being MoE.
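For scale, here's a back-of-envelope sizing sketch. The bits-per-weight figures are approximate GGUF-style averages, not exact numbers, so treat the outputs as ballpark:

```python
# Back-of-envelope memory footprint for DeepSeek Coder v2 236B weights.
# Bits-per-weight values are rough GGUF-style averages (assumptions, not
# exact figures). MoE routing saves compute per token, not resident
# memory: all 236B parameters still have to live somewhere.

PARAMS = 236e9  # total parameter count

quants = {
    "FP16":   16.0,  # unquantized half precision
    "Q8_0":    8.5,  # ~8.5 bpw including block scales (approximate)
    "Q4_K_M":  4.8,  # popular size/quality tradeoff (approximate)
    "Q2_K":    2.6,  # the "lobotomy level" option above (approximate)
}

for name, bpw in quants.items():
    gb = PARAMS * bpw / 8 / 1e9
    print(f"{name:7s} ~{gb:4.0f} GB")

# FP16   ~ 472 GB -> 512GB-DDR5 EPYC territory
# Q8_0   ~ 251 GB -> big main memory or a tall Mac cluster
# Q4_K_M ~ 142 GB -> 192GB Mac Studio, or several 4090s
# Q2_K   ~  77 GB -> a single 96GB+ unified-memory box
```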
What's going to be the path for the GPU-poors who need to run LLMs in "ITAR mode" without building their own private datacenter or buying NVIDIA racks?
Resolves to as many options as apply, based on public information. If there is no public information, resolves based on vibes and a DIY pricing spreadsheet.
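A minimal sketch of what that DIY pricing spreadsheet might look like; every price and capacity below is an illustrative placeholder, not a real quote:

```python
# Hypothetical cost comparison across the answer options above.
# All prices are illustrative placeholders (assumptions), not quotes.

MODEL_GB = 142  # ~Q4_K_M footprint from the sizing sketch above

builds = {
    # build                          (memory GB, USD)  <- placeholders
    "Mac Studio 192GB (unified)":      (192,  6_000),
    "EPYC + 512GB DDR5 (CPU-only)":    (512,  5_000),
    "4x RTX 4090 (96GB total VRAM)":   ( 96,  8_000),
    "2x Mac Studio over Thunderbolt":  (384, 12_000),
}

for name, (mem_gb, usd) in builds.items():
    verdict = "fits" if mem_gb >= MODEL_GB else "too small"
    print(f"{name:32s} ${usd:>6,}  {usd / mem_gb:5.1f} $/GB  {verdict}")
```

Note how the 4x 4090 build comes up short at Q4_K_M under these placeholder numbers; that rig would be betting on the Q2_K "lobotomy" option instead.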
(Chinese-overlords variant of the Llama 3 405B question, in case Meta gets cold feet.)
This question is managed and resolved by Manifold.
Related questions
Will most computers still run on silicon-based hardware in 2100? (46% chance)
Will an algorithm be able to work on million-line codebases before 2026? (44% chance)
Will I complete Practical Deep Learning for Coders in 2024? (57% chance)
What will be the best score on the InterCode (Bash) benchmark before 2025? (71% chance)
Will compute get a million times more energy efficient by 2075? (10% chance)
Will the need for coding done by humans become almost entirely obsolete by the end of 2024? (3% chance)