
An LLM as capable as GPT-4 runs on one 4090 by March 2025
13 · Ṁ1k · Ṁ845 · Mar 2
35% chance
e.g. Winogrande >= 87.5%
This question is managed and resolved by Manifold.
Does it count if I can run the LLM while also using CPU RAM offloading, as ollama does automatically? (It would be very slow, but it would work.)
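For context on what that offloading trade-off looks like, here is a rough sketch of the VRAM budgeting behind a GPU/CPU layer split. All the numbers are illustrative assumptions (a hypothetical ~70B-parameter model quantized to 4 bits, 80 transformer layers, a 24 GiB RTX 4090, 4 GiB reserved for KV cache and activations), not figures from this market:

```python
def split_layers(n_layers, bytes_per_layer, vram_bytes, overhead_bytes):
    """Return (gpu_layers, cpu_layers) for a given VRAM budget.

    Layers that don't fit in VRAM (after reserving overhead for the
    KV cache and activations) are offloaded to CPU RAM; runtimes such
    as ollama or llama.cpp perform a split like this automatically.
    """
    usable = vram_bytes - overhead_bytes
    gpu = min(n_layers, max(0, usable // bytes_per_layer))
    return gpu, n_layers - gpu

GiB = 1024 ** 3
# Assumed figures: ~35 GiB of 4-bit weights spread over 80 layers.
n_layers = 80
bytes_per_layer = (35 * GiB) // n_layers

gpu_layers, cpu_layers = split_layers(
    n_layers, bytes_per_layer, vram_bytes=24 * GiB, overhead_bytes=4 * GiB
)
print(gpu_layers, cpu_layers)  # → 45 35
```

Under these assumptions roughly 35 of 80 layers end up in CPU RAM, which is why generation gets slow: every token has to round-trip those layers over the PCIe bus instead of staying on the GPU.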