
All these predictions are taken from Forbes/Rob Toews' "10 AI Predictions For 2025".
You can find the 2024 predictions here, and their resolution here.
You can find all the markets under the tag [2025 Forbes AI predictions].
Note that I will resolve according to whatever Forbes/Rob Toews says in his resolution article for the 2025 predictions, even if I or others disagree with his decision.
I might bet in this market, as I have no power over the resolution.
Description of this prediction from the article:
No topic in AI has generated more discussion in recent weeks than scaling laws—and the question of whether they are coming to an end.
First introduced in a 2020 OpenAI paper, the basic concept behind scaling laws is straightforward: as the number of model parameters, the amount of training data, and the amount of compute increase when training an AI model, the model’s performance improves (technically, its test loss decreases) in a reliable and predictable way. Scaling laws are responsible for the breathtaking performance improvements from GPT-2 to GPT-3 to GPT-4.
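For reference, a minimal sketch of the power-law form reported in that 2020 paper (Kaplan et al., "Scaling Laws for Neural Language Models"): test loss falls predictably as parameters, data, or compute grow. The constants and exponents below are empirically fitted values from that work, shown here only to illustrate the shape of the relationship, not as exact figures.

```latex
% Power-law scaling relations, per Kaplan et al. (2020).
% L = test loss; N = model parameters; D = dataset size; C = training compute.
% N_c, D_c, C_c and the exponents \alpha_{N,D,C} are empirically fitted constants.
L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha_N}, \qquad
L(D) \approx \left(\frac{D_c}{D}\right)^{\alpha_D}, \qquad
L(C) \approx \left(\frac{C_c}{C}\right)^{\alpha_C}
```

The key point for this prediction is only the functional form: holding the other factors fixed, each order-of-magnitude increase in N, D, or C buys a predictable reduction in loss.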
Much like Moore’s Law, scaling laws are not in fact laws but simply empirical observations. Over the past month, a series of reports have suggested that the major AI labs are seeing diminishing returns to continued scaling of large language models. This helps explain, for instance, why OpenAI’s GPT-5 release keeps getting delayed.
The most common rebuttal to plateauing scaling laws is that the emergence of test-time compute opens up an entirely new dimension on which to pursue scaling. That is, rather than massively scaling compute during training, new reasoning models like OpenAI’s o3 make it possible to massively scale compute during inference, unlocking new AI capabilities by enabling models to “think for longer.”
This is an important point. Test-time compute does indeed represent an exciting new avenue for scaling and for AI performance improvement.
But another point about scaling laws is even more important and too little appreciated in today’s discourse. Nearly all discussions about scaling laws—starting with the original 2020 paper and extending all the way through to today’s focus on test-time compute—center on language. But language is not the only data modality that matters.
Think of robotics, or biology, or world models, or web agents. For these data modalities, scaling laws have not been saturated; on the contrary, they are just getting started. Indeed, rigorous evidence of the existence of scaling laws in these areas has not even been published to date.
Startups building foundation models for these newer data modalities—for instance, EvolutionaryScale in biology, Physical Intelligence in robotics, World Labs in world models—are seeking to identify and ride scaling laws in these fields the way that OpenAI so successfully rode LLM scaling laws in the first half of the 2020s. Next year, expect to see tremendous advances here.
Don’t believe the chatter. Scaling laws are not going away. They will be as important as ever in 2025. But the center of activity for scaling laws will shift from LLM pretraining to other modalities.