From a recent arXiv preprint:
We introduce FrontierMath, a benchmark of hundreds of original, exceptionally challenging mathematics problems crafted and vetted by expert mathematicians. The questions cover most major branches of modern mathematics -- from computationally intensive problems in number theory and real analysis to abstract questions in algebraic geometry and category theory. Solving a typical problem requires multiple hours of effort from a researcher in the relevant branch of mathematics, and for the upper end questions, multiple days. FrontierMath uses new, unpublished problems and automated verification to reliably evaluate models while minimizing risk of data contamination. Current state-of-the-art AI models solve under 2% of problems, revealing a vast gap between AI capabilities and the prowess of the mathematical community. As AI systems advance toward expert-level mathematical abilities, FrontierMath offers a rigorous testbed that quantifies their progress.
This question resolves to YES if the state-of-the-art average accuracy score on the FrontierMath benchmark, as reported prior to midnight, January 1, 2028, Pacific Time, is above 85.0% for any fully automated computer method. Credible reports include but are not limited to blog posts, arXiv preprints, and papers. Otherwise, this question resolves to NO.
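For concreteness, here is a minimal sketch of the evaluation this threshold refers to, assuming each problem comes with a machine-checkable expected answer (as the paper's mention of automated verification suggests); the names below are illustrative and not taken from any actual FrontierMath code.

```python
# Minimal sketch of the scoring this question cares about, assuming each problem
# ships a machine-checkable expected answer. All names here are illustrative,
# not from the FrontierMath codebase.
from dataclasses import dataclass
from typing import Callable


@dataclass
class Problem:
    statement: str
    verify: Callable[[str], bool]  # True iff a submitted answer is correct


def average_accuracy(problems: list[Problem], solver: Callable[[str], str]) -> float:
    """Fraction of problems a fully automated solver answers correctly."""
    correct = sum(1 for p in problems if p.verify(solver(p.statement)))
    return correct / len(problems)


# Resolution check: YES iff some fully automated method credibly reports, e.g.,
#   average_accuracy(frontiermath_problems, my_solver) > 0.85
```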
I will use my discretion in determining whether a result should be considered valid. Obvious cheating, such as including the test set in the training data, does not count.
Update 2024-12-21 (PST): There is no maximum inference budget for this benchmark; models can use any amount of compute to solve the problems. (AI summary of creator comment)
[Asking this on all the FrontierMath markets.]
If FrontierMath changes (e.g. if Tier 4 is added to what's considered to be the official FrontierMath benchmark), how does that affect the resolution of this question?
It seems to me like the fair way to do it is to go by the original FrontierMath benchmark (modulo small tweaks/corrections), but I'm not sure that future benchmark scores will be reported separately for the original vs. new problems.
@EricNeyman Excellent question! IMO the best option is for the guys from Epoch to create a separate benchmark for Tier 4, or to fork FrontierMath and give it a new name like FrontierMath-Enhanced, or something like that.
"the fair way to do it is to go based on the original FrontierMath benchmark" - I agree.
@Metastable I agree! But it would also be nice to have clarity on what happens if Epoch doesn't separate them cleanly like that.
@mathvc Seems literally false as stated because:
- AlphaProof needs formalized statements in Lean.
- I don't see how you can use the system to generate numeric answers effectively (the modality is somewhat different from the IMO; see the sketch below).
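To make the modality point concrete, here is a toy sketch in Lean 4 with Mathlib (a stand-in far easier than anything in FrontierMath, and not taken from the benchmark): a "find the least value" problem can only be formalized once the numeric answer is already written into the statement, so an AlphaProof-style system has to conjecture that value before it has anything to prove.

```lean
import Mathlib

-- Toy stand-in: "the least n with n^2 > 10" has answer 4, and that answer must
-- appear in the theorem statement itself before any proof search can begin.
theorem least_n_with_sq_gt_ten :
    10 < 4 ^ 2 ∧ ∀ n : ℕ, 10 < n ^ 2 → 4 ≤ n := by
  constructor
  · norm_num                             -- 4^2 = 16 > 10
  · intro n hn
    by_contra h
    push_neg at h                        -- h : n < 4
    interval_cases n <;> norm_num at hn  -- n = 0, 1, 2, 3 each contradict 10 < n^2
```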
@RyanGreenblatt FWIW, 2024 IMO P6 has a numeric answer (find the smallest possible c such that ...), which AlphaProof independently found, i.e., it wasn't told to prove that the answer is the smallest; it independently conjectured the answer and then proved it was the smallest. There are also many other olympiad problems that require numeric answers, so I don't think it's too much of a stretch to assume AlphaProof is capable of solving such problems.
@zsig At least some problems in FrontierMath have concepts not supported in Lean/Mathlib, so I'm not sure the AlphaProof setup can be applied as is.
@TamayBesiroglu I agree that seems like the biggest hurdle, too, given that IMO problems are designed to be elementary; if AlphaProof has to reinvent algebraic geometry to solve a problem, it likely won't succeed.
@zsig AlphaProof was trained on a lot of grad-level math too. I don't think it had to reinvent algebraic geometry; this knowledge is already available to it.
@zsig In their blog post they claim it was trained on a curriculum ranging from basic math to advanced material from research papers, afaik.
I assume that an "AI model" is being interpreted broadly (e.g., it still counts as an "AI model" if it's hooked up to Lean and gets to use library_search), but it would be good to clarify.
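As a purely illustrative example of that kind of setup (Lean 4 with Mathlib; `library_search` is spelled `exact?` in current Mathlib):

```lean
import Mathlib

-- A model "hooked up to Lean" can hand its goal to a proof-search tactic and let
-- the tactic look up a closing lemma from the library.
example (a b : ℕ) : a + b = b + a := by
  exact?  -- should close the goal with something like `Nat.add_comm a b`
```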
Frankly, I think the easiest criterion would be whether any fully automated method is able to achieve 85.0% or above; it's not like any non-AI methods are showing any signs that they might get there.