Will an AI score over 80% on FrontierMath Benchmark in 2025
112 traders
Ṁ120k
Dec 31
13% chance

"Today we're launching FrontierMath, a benchmark for evaluating advanced mathematical reasoning in AI. We collaborated with 60+ leading mathematicians to create hundreds of original, exceptionally challenging math problems, of which current AI systems solve less than 2%.
Existing math benchmarks like GSM8K and MATH are approaching saturation, with AI models scoring over 90%—partly due to data contamination. FrontierMath significantly raises the bar. Our problems often require hours or even days of effort from expert mathematicians.
We evaluated six leading models, including Claude 3.5 Sonnet, GPT-4o, and Gemini 1.5 Pro. Even with extended thinking time (10,000 tokens), Python access, and the ability to run experiments, success rates remained below 2%—compared to over 90% on traditional benchmarks."


As a Math PhD student who uses LLMs for math every day, I can say that there is still a long way to go before LLMs can effectively solve these problems. The only scenario where I see this happening soon is if models like AlphaProof scale significantly more than expected within the next year.

@Grothenfla have you tried using our lord and savior o1-pro?

@MalachiteEagle Not yet... But o1-pro's AIME score is similar to that of this new model, Kimina (86%). I believe these formal theorem provers are the real contenders.

@Grothenfla i'm curious what you think of the new natural language models that got gold on the IMO? Like has your view updated? I think that also alphaproof-like systems will become quite good but it's less clear to me than it was when you wrote that comment that they would be better than the general reasoners

@Bayesian Yes, I have updated my view after Gemini's and OpenAI's results. To be honest, I thought we were very far away from an achievement like this. We still know little about these LLMs, but now I think it is possible that they become as good as the best mathematicians in the coming years. But I still think systems like AlphaProof have a better chance of becoming superhuman mathematicians, similar to AlphaGo and AlphaZero becoming superhuman in their respective areas.

@Grothenfla yeah, I pretty much agree! The only consideration that goes against that is that I could imagine there being several orders of magnitude (200x?) more research effort and compute allocated to general reasoners than to AlphaProof/AlphaZero-like systems, and there could be enough far transfer that this ends up being wrong. I guess I'd add that the narrow math systems would likely become much better at proofs for a similar amount of effort, but are probably hopeless for some tasks mathematicians have to do that are more diverse than theorem proving, like deciding which research direction to pursue next. Exciting times..?

sold Ṁ12 YES

o4 is coming

@MalachiteEagle by end of year we might already have o6 (built into GPT-5 or whatever they will call it)

@sponge can you clarify whether you will use official Epoch AI results or outside claims?

fk it i'll buy more at 35%

@Bayesian Did something change your mind? It seems like Grok 3 isn't quite at the level of o3 on math and coding benchmarks, even with reasoning enabled.

@TimothyJohnson5c16 polymarket has a market that has given me confidence that I am right

(90% is much harder than 80% because there's around a 10% error rate, i.e. ~10% of problems can't be solved correctly according to the benchmark)
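To make that arithmetic concrete, here is a minimal back-of-the-envelope sketch. The function name and the second example's numbers are illustrative, not from the benchmark; the point is just that erroneous items cap the attainable score.

```python
def max_score(error_rate, accuracy_on_valid=1.0):
    """Highest attainable benchmark score, assuming a fraction
    `error_rate` of items are unanswerable (wrong answer key or
    ill-posed question) and the model scores `accuracy_on_valid`
    on the rest."""
    return (1 - error_rate) * accuracy_on_valid

# With a 10% error rate, even a model that is perfect on every
# valid problem tops out at 90% -- so 90% is a far harder bar
# than 80%.
print(max_score(0.10))  # -> 0.9
# Hypothetical: 6% known errors, 85% accuracy on valid problems.
print(max_score(0.06, 0.85))
```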

@Bayesian Yeah, if I were on Polymarket, I would just call that free money.

@Bayesian Though of course, 10% error rate is also an estimate, right? In the world where an AI model reaches 90%, the error rate is probably a lot lower.

@TimothyJohnson5c16 yeah, the 10% error rate is an estimate; the errors they actually found were around 6% iirc, and they estimated the number of errors they would have missed

@Bayesian that a big deal if there's a 10% error rate though right?

@NebulaByte I don’t understand your comment could you rephrase

@Bayesian what did you mean by 10% error in your comment above?

@NebulaByte i think i mean that about 10% of (question, answer) pairs in the benchmark are erroneous in some way: either the question has some hidden assumptions that make it not necessarily solvable, or the answer is incorrect (the person solving the problem to add it to the benchmark made a mistake in some step), or something like that

opened a Ṁ2,000 NO at 35% order

@Bayesian want to buy more? I put a limit order at 35%.

opened a Ṁ1,000 NO at 35% order

Lol, Acceleration took most of it, so I added some more.

opened a Ṁ5,000 YES at 36% order

@TimothyJohnson5c16 I was scared, but I changed my mind; I'll buy more. YES order at 36%.

opened a Ṁ10,000 NO at 40% order

@Bayesian if you still want more I put a limit order at 40

opened a Ṁ5,000 NO at 35% order

@Bayesian I think yours was taken already? I put more NO at 35%.

© Manifold Markets, Inc.