Is LeCun right that open-source AI will soon become 'unbeatable'? (EOY 2025)
1.8k traders · Ṁ130k volume · closes 2026 · 15% chance

On Oct 14, 2023, Yann LeCun (Chief AI Scientist at Meta) stated: "Open source AI models will soon become unbeatable. Period."

Resolves YES if, at the end of 2025, it's decisively clear (in the judgment of Eliezer Yudkowsky) that open-source LLMs (or their successors in the role of widely used AGI tech) are more powerful or more cost-efficient than their closed-source alternatives. That is, if either all the leaderboards are full of open-source LLMs with successors to GPT-4 or Claude being far behind, or if most of the business spending for inference seems to be on running AI models built on open-source foundation models, this resolves YES.

If it's hard to tell or if that seems wrong, resolves NO. "Unbeatable" seems like it shouldn't be subtle.


Almost any product in China is running on DeepSeek now. How do we measure inference spending, though? That's not public data, and it's effectively impossible to acquire since spending is so decentralized.

opened a Ṁ500 NO at 50% order

With this definition of "unbeatable" I do not see how any answer other than NO is possible

This is true as of right now - things might change by the end of the year but R1 is indisputably superior to all other models when looking at the combination of cost-efficiency and ability.

@Balasar I don't think it's true that, as of right now, "either all the leaderboards are full of open-source LLMs with successors to GPT-4 or Claude being far behind, or if most of the business spending for inference seems to be on running AI models built on open-source foundation models"


@SemioticRivalry that's pretty fair, and I suppose I should have read closer. The question is designed to resolve to NO even if the spirit of the question is met.

@Balasar R1 is #6 on the LLM arena leaderboard right now. I like R1 and use it pretty often, it's very good, but it's not "indisputably superior to all other models".

@ProjectVictory That comment was a month ago; 3 of the higher-ranked models were released since then, and the others were substantially more expensive to produce

@Balasar It's not asking about the Pareto front between capability and cost; it's talking about overall capability. I don't think it's reasonable to say that R1 is very close to beating the best models right now, though I don't think it's impossible that DeepSeek produces and open-sources such a model.

@DarklyMade Actually, it's asking exactly that: "more powerful or more cost-efficient"

@Balasar Ah yeah, sorry, my reading comprehension is often disappointing. It seems like the question should resolve YES if the Pareto front of cost and capability leads open-source models to consume the majority of inference spending, or if the pure capabilities of open-source models lead them to dominate the leaderboards, since the leaderboards have no measure of cost efficiency.
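For illustration, here is a minimal Python sketch of how a cost/capability Pareto front could be computed; the model names, prices, and scores are placeholders, not real benchmark data:

```python
# Sketch: which models sit on the cost/capability Pareto front?
# All entries below are illustrative placeholders, not real measurements.

models = {
    # name: (cost_per_million_tokens_usd, benchmark_score)
    "closed-model-a":  (15.0, 90),
    "closed-model-b":  (3.0, 85),
    "open-model-r1":   (0.55, 84),
    "open-model-405b": (2.7, 82),
}

def pareto_front(entries):
    """Return models not dominated by another model that is at least as cheap and at least as capable."""
    front = []
    for name, (cost, score) in entries.items():
        dominated = any(
            other_cost <= cost
            and other_score >= score
            and (other_cost, other_score) != (cost, score)
            for other, (other_cost, other_score) in entries.items()
            if other != name
        )
        if not dominated:
            front.append(name)
    return front

print(pareto_front(models))
# With these placeholder numbers: ['closed-model-a', 'closed-model-b', 'open-model-r1']
```

Whether open-source models end up covering most of that front, and whether that translates into most inference spending, is exactly what the market is asking.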

DeepSeek is a serious contender

@mods In the Order Book graph ("Cumulative shares vs probability"), the green curve, which shows the cumulative YES limit orders, seems to be off. There are 2,383 YES limit orders in total, but the graph shows 140k?

Also, the user Mira does not exist anymore, but their limit orders are still listed.

@JonasSourlier The graph shows the total number of potential shares to be bought with the limit orders, while the order book shows the amount of potential mana required to buy the shares. For example, Mira's Ṁ1,001 order at 1% buys 100,100 shares, and is listed as 1,001 in one place and 100,100 in the other.
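As a rough illustration of that conversion (assuming the simple rule implied by the example above, shares ≈ mana / price, and ignoring fees and order-book mechanics):

```python
# Sketch of the mana -> shares conversion described above: a YES limit order
# filled at probability p pays p mana per share, so M mana buys roughly M / p shares.
# This is a simplification; it ignores fees and partial fills.

def yes_shares(mana: float, probability: float) -> float:
    """Approximate shares bought by a YES limit order filled at `probability`."""
    return mana / probability

print(yes_shares(1001, 0.01))  # Mira's order: Ṁ1,001 at 1% -> 100,100 shares
print(yes_shares(1, 0.01))     # Ṁ1 at 1% -> 100 shares
```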

I’m pretty sure it’s normal that the limit order of a deleted account is still there.

Let me know if you have more questions.

@JonasSourlier

There are 2,383 YES limit orders in total, but the graph shows 140k?

Note that the limit orders above are denominated in mana (Ṁ), while the chart below is denominated in shares: a Ṁ1 limit order at 1% is worth 100 shares.

Also, the user Mira does not exist anymore, but their limit orders are still listed.

FWIW the mods don't have direct input on how the site works (for feedback you can use the Discord)

@bagelfan I was too slow!

actual inference spend seems to be extremely low

OpenAI was only collecting about $700m in inference revenue. Assuming that both Google and Anthropic have less inference revenue than OpenAI, we have less than $2.1bn total inference spend across all the major players.
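A quick back-of-the-envelope restating that bound; the ~$700m figure and the assumption that Google and Anthropic each collect no more than OpenAI come from the comment above, not from any public filing:

```python
# Rough upper bound on total inference spend across the major players,
# using the commenter's assumptions (not audited figures).

openai_inference_revenue = 700e6   # ~$700m, per the comment above
other_players = 2                  # Google and Anthropic, each assumed <= OpenAI

total_upper_bound = openai_inference_revenue * (1 + other_players)
print(f"${total_upper_bound / 1e9:.1f}bn")  # -> $2.1bn
```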

Fascinating. You don't hear that figure in the news as much.

The gap has closed with Llama 405B, and we've still got 1.5 years

That is, if either all the leaderboards are full of open-source LLMs with successors to GPT-4 or Claude being far behind

this seems very unlikely!

or if most of the business spending for inference seems to be on running AI models built on open-source foundation models

this seems less unlikely, but still unlikely. Google, OpenAI, and Anthropic are still gonna be competing!

The second some VC-funded AI companies fail and H100s start to flood the market, we're gonna have Llama 405B instances everywhere. And then? It would be the cheapest option by far.


Why won't Google, OpenAI, and Anthropic match those prices?

(Also, aren't all of those actively developing better models?)

The open-source model DeepSeek is currently the clear leader in cost-efficiency. (Screenshot from https://artificialanalysis.ai/)
