Will a machine learning model score above 50.0% on the MATH dataset before 2025?
Resolved YES (Jun 30)
From Hendrycks et al. (https://arxiv.org/abs/2103.03874):
> Many intellectual endeavors require mathematical problem solving, but this skill remains beyond the capabilities of computers. To measure this ability in machine learning models, we introduce MATH, a new dataset of 12,500 challenging competition mathematics problems. Each problem in MATH has a full step-by-step solution which can be used to teach models to generate answer derivations and explanations. [...]
> Even though we are able to increase accuracy on MATH, our results show that accuracy remains relatively low, even with enormous Transformer models. Moreover, we find that simply increasing budgets and model parameter counts will be impractical for achieving strong mathematical reasoning if scaling trends continue. While scaling Transformers is automatically solving most other text-based tasks, scaling is not currently solving MATH. To have more traction on mathematical problem solving we will likely need new algorithmic advancements from the broader research community.
In addition,
> It's also worth mentioning the competition maths problems in MATH are designed under the assumption that competitors don't use calculators or script executors. That way, solving them requires making a clever observation or reducing the search space to make the problem tractable. With a script executor, competitors do not need to figure out how to succinctly reason to the conclusion and cleverness is rarely needed.
> There are other competition problems designed to be difficult even with calculators and script executors, but there are not nearly as many of these problems lying around.
The best model in the paper achieved an average accuracy of only 6.9% on the dataset.
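For context on how that accuracy is graded: each MATH problem ships as a small JSON record whose step-by-step solution embeds the final answer in a LaTeX \boxed{...} expression, and evaluations compare that boxed answer against the model's final answer. The sketch below illustrates the record layout and a boxed-answer extractor; the field names follow the public MATH release, while the extractor itself is an illustrative assumption, not the paper's official grading code.

```python
# Minimal sketch of a MATH record and boxed-answer extraction.
# Field names follow the public MATH release; the extractor is illustrative,
# not the official grading script from Hendrycks et al.

sample = {
    "problem": "What is $1 + 2 + \\cdots + 10$?",
    "level": "Level 1",
    "type": "Algebra",
    "solution": "The sum of the first $n$ positive integers is $n(n+1)/2$, "
                "so the answer is $\\boxed{55}$.",
}

def extract_boxed(solution: str) -> str:
    """Return the contents of the last \\boxed{...} in a solution string."""
    start = solution.rfind("\\boxed{")
    if start == -1:
        return ""
    i = start + len("\\boxed{")
    depth = 1
    out = []
    while i < len(solution):
        ch = solution[i]
        if ch == "{":
            depth += 1
        elif ch == "}":
            depth -= 1
            if depth == 0:
                break
        out.append(ch)
        i += 1
    return "".join(out)

print(extract_boxed(sample["solution"]))  # -> "55"
```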
This question resolves to YES if the state-of-the-art average accuracy score on the MATH dataset, as reported prior to January 1st, 2025, Eastern Time, is above 50.0%. Credible reports include, but are not limited to, blog posts, arXiv preprints, and papers. Otherwise, it resolves to NO.
I will use my discretion in determining whether a result should be considered valid. Obvious cheating, such as including the test set in the training data, does not count. Only results that use a no-calculator restriction will count.
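For concreteness, here is a minimal sketch of what "average accuracy" and the 50.0% bar mean in code, assuming we already have the model's final answers and the reference answers for the test split. This is not the official MATH evaluation harness, which additionally normalizes equivalent LaTeX forms (e.g. "1/2" versus "\frac{1}{2}") before comparing.

```python
# Minimal sketch of the resolution check (not the official MATH grader):
# average accuracy = fraction of test problems whose predicted final answer
# exactly matches the reference answer.

def average_accuracy(predictions: list[str], references: list[str]) -> float:
    """Fraction of problems where the predicted final answer matches exactly."""
    assert len(predictions) == len(references) and references
    correct = sum(p.strip() == r.strip() for p, r in zip(predictions, references))
    return correct / len(references)

# Toy example with hypothetical answers; the real MATH test split has 5,000 problems.
acc = average_accuracy(["55", "\\frac{1}{2}"], ["55", "1/2"])
print(f"average accuracy = {acc:.1%}")                   # 50.0% on this toy pair
print("resolves YES" if acc > 0.50 else "resolves NO")   # must be strictly above 50.0%
```

Note the strict inequality: a reported score of exactly 50.0% would not be enough to resolve YES.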
This question is managed and resolved by Manifold.
🏅 Top traders
| # | Name | Total profit |
|---|------|--------------|
| 1 | | Ṁ84 |
| 2 | | Ṁ80 |
| 3 | | Ṁ79 |
| 4 | | Ṁ67 |
| 5 | | Ṁ17 |
Seems like this already happened? https://ai.googleblog.com/2022/06/minerva-solving-quantitative-reasoning.html :O
Related questions
- Will an AI score over 80% on FrontierMath Benchmark in 2025 (37% chance)
- Will an AI achieve >85% performance on the FrontierMath benchmark before 2028? (81% chance)
- Will any AI get a score of at least 45% on Humanity’s Last Exam benchmark before March 11, 2025? (12% chance)
- Will any AI model score >80% on Epoch's Frontier Math Benchmark in 2025? (26% chance)
- Will any AI model achieve > 40% on Frontier Math before 2026? (92% chance)
- Will an AI model outperform 95% of Manifold users on accuracy before 2026? (58% chance)
- Will AI image generating models score >= 90% on Winoground by June 1, 2025? (82% chance)
- Will an AI achieve >85% performance on the FrontierMath benchmark before 2027? (64% chance)
- Will an AI achieve >80% performance on the FrontierMath benchmark before 2027? (80% chance)
- Will any model get above human level on the Simple Bench benchmark before September 1st, 2025. (68% chance)