What will the inference cost of the best publicly available LM be in 2030?
.001-.01 docs — 4%
.01-.1 docs — 12%
.1-1 docs — 23%
1-10 docs — 26%
10-100 docs — 12%
100-1K docs — 13%
1K-10K docs — 2%
10K-100K docs — 0.2%

Consider the best publicly available language model in 2030. For a single 2023 US dollar, for how many 2,000-word documents can I generate five words each?

I will use a somewhat conservative estimate: the best inference cost I can achieve after three weeks of work, with whatever funds I have at the time, and without using publicly inaccessible resources.

Multimodal models that can operate on text count as LMs for the purpose of this question.

I will only accept answers that span a single order of magnitude, like so:
.1-1 docs, 1-10 docs, 10-100 docs

I may trade in this market.

bought Ṁ10 of .1-1 docs
bought Ṁ1 of .01-.1 docs

The new ChatGPT model (https://openai.com/blog/introducing-chatgpt-and-whisper-apis) is priced at $0.002 per 1K tokens, so:

187 docs/$ = 1 / [(2.005 × 1K words/doc) × (4/3 tokens/word) × ($0.002 / 1K tokens)]
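The arithmetic above can be sanity-checked with a short script. The inputs (2,000-word document plus 5 generated words, ~4/3 tokens per word, $0.002 per 1K tokens) are the commenter's assumptions, not guaranteed figures:

```python
# Docs-per-dollar estimate at the cited ChatGPT API price.
words_per_doc = 2000 + 5        # document words plus the 5 generated words
tokens_per_word = 4 / 3         # rough English tokenization ratio
dollars_per_token = 0.002 / 1000  # $0.002 per 1K tokens

cost_per_doc = words_per_doc * tokens_per_word * dollars_per_token
docs_per_dollar = 1 / cost_per_doc
print(f"{docs_per_dollar:.0f} docs/$")  # → 187 docs/$
```

At this price the answer falls in the 100-1K docs bucket; a resolution in a lower bucket would require 2030 inference to be more expensive per token than the 2023 ChatGPT API.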