4) The most advanced closed models will continue to outperform the most advanced open models by a meaningful margin.
Dec 31

All these predictions are taken from Forbes/Rob Toews' "10 AI Predictions For 2024".
The 2023 predictions can be found here, and their resolution here.
You can find all the markets under the tag [2024 Forbes AI predictions].

  • I will resolve this market to whatever Forbes/Rob Toews says in their resolution article for the 2024 predictions.

  • I might bet in this market, as I have no power over the resolution.

Description of this prediction from the article:
One important topic in AI discourse today is the debate around open-source and closed-source AI models. While most cutting-edge AI model developers—OpenAI, Google DeepMind, Anthropic, Cohere, among others—keep their most advanced models proprietary, a handful of companies including Meta and buzzy new startup Mistral have chosen to make their state-of-the-art model weights publicly available.

Today, the highest-performing foundation models (e.g., OpenAI’s GPT-4) are closed-source. But many open-source advocates argue that the performance gap between closed and open models is shrinking and that open models are on track to overtake closed models in performance, perhaps by next year. (This chart made the rounds recently.)

We disagree. We predict that the best closed models will continue to meaningfully outperform the best open models in 2024 (and beyond).

The state of the art in foundation model performance is a fast-moving frontier. Mistral recently boasted that it will open-source a GPT-4-level model sometime in 2024, a claim that has generated excitement in the open source community. But OpenAI released GPT-4 in early 2023. By the time Mistral comes out with this new model, it will likely be more than a year behind the curve. OpenAI may well have released GPT-4.5 or even GPT-5 by then, establishing an entirely new performance frontier. (Rumors have been circulating that GPT-4.5 may even drop before the end of 2023.)

As in many other domains, catching up to the frontier as a fast follower, after another group has defined it, is easier to achieve than establishing a new frontier before anyone else has shown it is possible. For instance, it was considerably riskier, more challenging and more expensive for OpenAI to build GPT-4 using a mixture-of-experts architecture, when this approach had not previously been shown to work at this scale, than it was for Mistral to follow in OpenAI’s footsteps several months later with its own mixture-of-experts model.

There is a basic structural reason to doubt that open models will leapfrog closed models in performance in 2024. The investment required to develop a new model that advances the state of the art is enormous, and will only continue to balloon for every step-change increase in model capabilities. Some industry observers estimate that OpenAI will spend around $2 billion to develop GPT-5.

Meta is a publicly traded company ultimately answerable to its shareholders. The company seems not to expect any direct revenue from its open-source model releases. Llama 2 reportedly cost Meta around $20 million to build; that level of investment may be justifiable, even without any associated revenue boost, given the strategic benefits. But is Meta really going to sink anywhere near $2 billion into the quest to build an AI model that outperforms anything else in existence, just to open-source it without any expectation of a concrete return on investment?

Upstarts like Mistral face a similar conundrum. There is no clear revenue model for open-source foundation models (as Stability AI has learned the hard way). Charging for hosting open-source models, for instance, becomes a race to the bottom on price, as we have seen in recent days with Mistral’s new Mixtral model. So—even if Mistral had access to the billions of dollars needed to build a new model that leapfrogged OpenAI—would it really choose to turn around and give that model away for free?

Our sneaking suspicion is that, as companies like Mistral invest ever greater sums to build ever more powerful AI models, they may end up relaxing their stance on open source and keeping their most advanced models proprietary so that they can charge for them.

(To be clear: this is not an argument against the merits of open-source AI. It is not an argument that open-source AI will not be important in the world of artificial intelligence going forward. On the contrary, we expect open-source models to play a critical role in the proliferation of AI in the years ahead. However: we predict that the most advanced AI systems, those that push forward the frontiers of what is possible in AI, will continue to be proprietary.)
