Will a machine learning model score above 50.0% on the MATH dataset before 2025?
Ṁ11k · Resolved YES on Jun 30
From Hendrycks et al. (https://arxiv.org/abs/2103.03874):
> Many intellectual endeavors require mathematical problem solving, but this skill remains beyond the capabilities of computers. To measure this ability in machine learning models, we introduce MATH, a new dataset of 12,500 challenging competition mathematics problems. Each problem in MATH has a full step-by-step solution which can be used to teach models to generate answer derivations and explanations. [...]
> Even though we are able to increase accuracy on MATH, our results show that accuracy remains relatively low, even with enormous Transformer models. Moreover, we find that simply increasing budgets and model parameter counts will be impractical for achieving strong mathematical reasoning if scaling trends continue. While scaling Transformers is automatically solving most other text-based tasks, scaling is not currently solving MATH. To have more traction on mathematical problem solving we will likely need new algorithmic advancements from the broader research community.
In addition,
> It's also worth mentioning the competition maths problems in MATH are designed under the assumption that competitors don't use calculators or script executors. That way, solving them requires making a clever observation or reducing the search space to make the problem tractable. With a script executor, competitors do not need to figure out how to succinctly reason to the conclusion and cleverness is rarely needed.
> There are other competition problems designed to be difficult even with calculators and script executors, but there are not nearly as many of these problems lying around.
The best model in the paper achieved an average accuracy of only 6.9% on the dataset.
This question resolves to YES if the state-of-the-art average accuracy score on the MATH dataset, as reported prior to January 1st, 2025 (Eastern Time), is above 50.0%. Credible reports include, but are not limited to, blog posts, arXiv preprints, and papers. Otherwise, it resolves to NO.
I will use my discretion in determining whether a result should be considered valid. Obvious cheating, such as including the test set in the training data, does not count. Only results that use a no-calculator restriction will count.
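For concreteness, here is a minimal sketch of how an "average accuracy" number like this is typically computed: the model's final answer is compared against the reference answer for each test problem, and the fraction of exact matches is averaged over the test split. The names below (`load_math_test`, `model_answer`) are hypothetical placeholders, and the normalization is deliberately crude; this is not the official MATH grading script.

```python
def normalize(answer: str) -> str:
    """Very rough normalization of a final answer string (placeholder only)."""
    return answer.strip().replace(" ", "")

def average_accuracy(problems, model_answer) -> float:
    """Fraction of problems where the model's final answer exactly matches the reference."""
    correct = 0
    for problem in problems:
        prediction = model_answer(problem["question"])  # model's final answer as a string
        if normalize(prediction) == normalize(problem["answer"]):
            correct += 1
    return correct / len(problems)

# Hypothetical resolution check: YES if reported accuracy exceeds 50.0%.
# accuracy = average_accuracy(load_math_test(), model_answer)
# resolves_yes = accuracy > 0.50
```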
This question is managed and resolved by Manifold.
Seems like this already happened? https://ai.googleblog.com/2022/06/minerva-solving-quantitative-reasoning.html :O
Related questions
Will an AI achieve >85% performance on the FrontierMath benchmark before 2028?
53% chance
Will an AI system be reported by OpenAI as of December 31st 2025 as having a pre-mitigation score of...
Will an AI achieve >85% performance on the FrontierMath benchmark before 2027?
29% chance
Will there be a model that has a 75% win rate against the latest iteration of GPT-4 as of January 1st, 2025?
70% chance
Will an AI score over 10% on FrontierMath Benchmark in 2025
67% chance
Will OpenAI Release a Model Capable of Reliably performing Gradeschool Math from Reasoning by Jan 1, 2025?
79% chance
Will an AI SWE model score higher than 50% on SWE-bench in 2024?
20% chance
Will an AI model outperform 95% of Manifold users on accuracy before 2026?
56% chance
Will any model get above human level on the Simple Bench benchmark before September 1st, 2025?
55% chance
Will an AI get a perfect SAT score before 2025?
14% chance