In what year will a GPT4-equivalent model be able to run on consumer hardware?

"Consumer hardware" is defined as costing no more than $3,000 USD for everything that goes inside the case (not including peripherals).

In terms of "GPT4-equivalent model," I'll go with whatever popular consensus indicates are the top performance benchmarks (up to three). The model's scores should be within 10% of GPT4's. In the absence of suitable benchmarks, I'll make an educated guess at resolution time after consulting experts on the subject.

All that's necessary is for the model to run inference; it doesn't matter how long generation takes, so long as you can type in a prompt and get a reply in less than 24 hours. So if GPT4's weights are released and someone shrinks the model down to run on consumer hardware, gets any output at all in less than a day, and that output meets the benchmarks, this resolves to whatever year that first happens in.
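"Shrinking a model down" in practice usually means quantization: storing each weight in fewer bits. As a rough sketch of whether a given model could fit inside a $3,000 build, here's a back-of-the-envelope estimate of weight size at different bit widths. The parameter counts below are for the open models discussed in the comments (GPT4's size is not public); the estimate deliberately ignores KV cache and activation memory, so real requirements are somewhat higher.

```python
def weight_bytes(n_params: float, bits_per_weight: float) -> float:
    """Approximate size of the weights alone, ignoring KV cache and runtime overhead."""
    return n_params * bits_per_weight / 8

GIB = 1024**3

# Published total parameter counts for the models mentioned in the comments.
for name, params in [("Llama 3 70B", 70e9), ("Mixtral 8x7B", 46.7e9)]:
    for bits in (16, 4):
        size = weight_bytes(params, bits) / GIB
        print(f"{name} @ {bits}-bit: ~{size:.0f} GiB")
```

At 16-bit, Llama 3 70B's weights alone are around 130 GiB, well beyond consumer hardware; quantized to 4-bit they drop to roughly 33 GiB, which a $3,000 machine with enough system RAM can hold even if no single consumer GPU can — and the market explicitly tolerates slow CPU inference as long as a reply arrives within 24 hours.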

Pretty close with Llama 3 70B

I think this is already true given the weights being released

@mayajalen Link me to something convincing?

bought Ṁ10 HIGHER

@LarsDoucet Currently the best open-source model you can run is Mixtral 8x7B. According to Chatbot Arena it's about as good as ChatGPT 3.5, so not yet.