Will AI usage in 2030 be split in tiers where the best and most expensive models are used rarely compared to cheaper and lower-quality models?
2030 · 83% chance

ChatGPT probably costs an average of "single-digit cents per chat," according to Sam Altman, and is offered free to consumers today.

At just 3 chats per minute, that would cost on the order of ~$9 per hour (with large error bars), approaching the level of human wages!
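The arithmetic behind that estimate can be sketched as follows; the per-chat figure is an assumed midpoint of Altman's "single-digit cents per chat," not a measured number:

```python
# Back-of-envelope cost estimate (assumed figures, large error bars).
cost_per_chat_cents = 5   # assumed midpoint of "single-digit cents per chat"
chats_per_minute = 3      # usage rate from the text

# 3 chats/min * 60 min/hr * 5 cents/chat = 900 cents/hr
cost_per_hour_cents = cost_per_chat_cents * chats_per_minute * 60
print(f"~${cost_per_hour_cents / 100:.2f} per hour")  # ~$9.00 per hour
```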

As large language models grow ever larger, that marginal cost could increase further. I think this fact will be significant for the future of AI.

The key error I see AI doomers making is failing to account for this exponentially increasing cost when they speculate about exponentially increasing intelligence.

The combinatorial nature of our world implies exponentially declining returns for more intelligence. Heuristics can only get us so far, and after that, there's just a giant search space. Think of the traveling salesman problem or Monte Carlo tree search – intelligence is NP-hard.
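To illustrate how fast that search space blows up, here is the standard count of distinct tours in the symmetric traveling salesman problem (fix the start city and ignore direction, leaving (n-1)!/2 tours); this is a textbook formula, not something from the market description:

```python
import math

def tsp_tours(n_cities: int) -> int:
    """Distinct tours in the symmetric TSP: (n-1)!/2."""
    return math.factorial(n_cities - 1) // 2

# The search space grows factorially, far faster than any exponential.
for n in (5, 10, 20):
    print(n, tsp_tours(n))
# 5  -> 12
# 10 -> 181440
# 20 -> 60822550204416000
```

No amount of per-step cleverness changes the asymptotics here, which is the sense in which heuristics "only get us so far."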

While I think AI will be utterly transformational, progress may be slower than expected if running the most productive AIs becomes exceedingly expensive and bottlenecked by hardware.

I imagine a world with different tiers of AI used in different ways, where cheaper tiers of AI will be used more.

This market resolves based on my subjective impression of whether this prediction is correct in 2030.


Update: one sufficient condition for resolving NO is if more money is spent on top-tier AI than on all the other tiers combined.


Suppose that, in terms of raw usage, most LM forward passes are less capable models processing Google search requests, but most LM companies are using the best models. How would this resolve?

predicted NO

@NoaNabeshima replace "LM companies" with "companies that use LMs"

predicted YES

@NoaNabeshima Would lean Yes in this case.

However, another way I will resolve NO is if more money is spent on the top-tier AI than on all the other tiers combined. Your example could plausibly trigger this condition.

predicted NO

Story for how this resolves NO:

Chinchilla gives us a one-time decrease in inference costs per training compute budget.

Future models will be smaller than Chinchilla-optimal (and so have cheaper inference costs) for a fixed compute budget, because of the large cost of post-training inference.

There will be lots of successful research on reducing inference costs (e.g., mixture-of-experts and lower-bit floats) that doesn't also increase effective training compute.

Under Chinchilla scaling laws, every 2 OOM increase in compute yields a 1 OOM increase in param count (roughly proportional to inference cost?).

Suppose we get a 6 OOM increase in training compute by 2030. That's a 3 OOM increase in inference compute. I think inference costs might be reduced significantly by then.
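The comment's scaling arithmetic can be sketched directly; the assumption (from Chinchilla-style scaling, where params and tokens each grow with the square root of training compute) is that 2 OOM of compute buys 1 OOM of params:

```python
# Sketch of the scaling arithmetic above, under the assumed Chinchilla
# relationship: param count grows as the square root of training compute,
# i.e. 2 OOM of compute -> 1 OOM of params (~inference cost per token).
training_compute_ooms = 6            # assumed increase by 2030 (from the comment)
param_ooms = training_compute_ooms / 2

print(param_ooms)  # 3.0 OOM increase in params / inference compute
```

Whether inference cost really tracks param count 1:1 is itself uncertain, as the comment's "(roughly proportional to inference cost?)" hedge notes.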

Maybe the best future models won't cost much more than today's best models, and for the majority of applications it will simply be worth paying a 10x premium to use the best model over the second best.

Imagine a world where this resolves NO. What would that world look like?

Is "cheaper" purely measured in terms of execution cost? Eg, if a company takes a model and does additional work to make it faster and cheaper to run, perhaps with custom hardware or regularisation, does that count as a cheaper model or a more expensive model?

predicted YES

@MartinRandall Yes, let's say it's only about the execution cost to run the model.

© Manifold Markets, Inc.