How will people run DeepSeek Coder v2 236B locally by 2025?
Lobotomy levels of quantization (e.g. Q2_K): 50%
Unified memory (e.g. M3 Ultra, Mac Studios): 50%
Non-GPU main memory (e.g. AMD EPYC with 512GB DDR5): 50%
Gaming GPUs in one motherboard (e.g. 4090s): 50%
Tall clustering (e.g. Mac Studios over Thunderbolt): 50%
Wide clustering (e.g. Petals): 50%
HBM FPGA/ASIC dark horse (e.g. AMD rains Versal chips like manna): 50%

DeepSeek Coder v2 is arguably a frontier model from a stellar Chinese team. It has cutting-edge efficiency tricks, improved reasoning thanks to extensive code pretraining, and what looks like an excellent math corpus.


It's also an absolute chonker despite being MoE.
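For scale, here is a minimal back-of-envelope sketch (not part of the resolution criteria) of how much memory the weights alone would need at a few quantization levels. The bits-per-weight figures are rough approximations for llama.cpp-style formats, and KV cache and runtime overhead are ignored.

```python
# Rough weight-memory estimate for a 236B-parameter model at a few
# quantization levels. Bits-per-weight values are approximate
# (llama.cpp quant formats vary per tensor); KV cache and runtime
# overhead are not counted.

TOTAL_PARAMS = 236e9  # all experts must be resident, even though MoE activates only a subset per token

# Approximate effective bits per weight (rounded assumptions, not exact figures)
QUANTS = {
    "FP16": 16.0,
    "Q8_0": 8.5,
    "Q4_K_M": 4.8,
    "Q2_K": 2.6,
}

for name, bpw in QUANTS.items():
    gib = TOTAL_PARAMS * bpw / 8 / 2**30
    print(f"{name:>7}: ~{gib:,.0f} GiB of weights")
```

Even at Q2_K that works out to roughly 70+ GiB of weights before any KV cache, which is why the options above skew toward large unified memory, big DDR5 boxes, or stitching multiple machines together.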


What's going to be the path for GPU-poors who need to run LLMs in "ITAR mode" without building their own private datacenter or buying NVIDIA racks?


Resolves to as many options as apply, based on public information. If there is no public information, resolves based on vibes and a DIY pricing spreadsheet.

(Chinese-overlords variant of the Llama 3 405B question, in case Meta gets cold feet.)
