Will a machine learning model score above 50.0% on the MATH dataset before 2025?
Resolved YES (Jun 30)
From Hendrycks et al. (https://arxiv.org/abs/2103.03874):
> Many intellectual endeavors require mathematical problem solving, but this skill remains beyond the capabilities of computers. To measure this ability in machine learning models, we introduce MATH, a new dataset of 12,500 challenging competition mathematics problems. Each problem in MATH has a full step-by-step solution which can be used to teach models to generate answer derivations and explanations. [...]
> Even though we are able to increase accuracy on MATH, our results show that accuracy remains relatively low, even with enormous Transformer models. Moreover, we find that simply increasing budgets and model parameter counts will be impractical for achieving strong mathematical reasoning if scaling trends continue. While scaling Transformers is automatically solving most other text-based tasks, scaling is not currently solving MATH. To have more traction on mathematical problem solving we will likely need new algorithmic advancements from the broader research community.
In addition,
> It's also worth mentioning the competition maths problems in MATH are designed under the assumption that competitors don't use calculators or script executors. That way, solving them requires making a clever observation or reducing the search space to make the problem tractable. With a script executor, competitors do not need to figure out how to succinctly reason to the conclusion and cleverness is rarely needed.
> There are other competition problems designed to be difficult even with calculators and script executors, but there are not nearly as many of these problems lying around.
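For concreteness, here is a minimal sketch of loading and inspecting the dataset the paper describes. The Hugging Face hub ID `hendrycks/competition_math` and the field names are assumptions based on common mirrors; the data can also be downloaded from the paper's GitHub repository.

```python
# Minimal sketch: load and inspect the MATH test split.
# NOTE: the hub ID "hendrycks/competition_math" and these field names are
# assumptions; the dataset is also available from the paper's repository.
from datasets import load_dataset

math_test = load_dataset("hendrycks/competition_math", split="test")

example = math_test[0]
print(example["problem"])   # competition problem statement
print(example["level"])     # difficulty band, e.g. "Level 5"
print(example["type"])      # subject area, e.g. "Algebra"
print(example["solution"])  # step-by-step solution ending in \boxed{...}
```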
The best model in the paper achieved an average accuracy of only 6.9% on the dataset.
This question resolves to YES if the state-of-the-art average accuracy score on the MATH dataset, as reported prior to January 1st, 2025, Eastern Time, is above 50.0%. Credible reports include but are not limited to blog posts, arXiv preprints, and papers. Otherwise, it resolves to NO.
I will use my discretion in determining whether a result should be considered valid. Obvious cheating, such as including the test set in the training data, does not count. Only results that use a no-calculator restriction will count.
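Resolution hinges on average accuracy: the fraction of test problems whose final answer matches the reference. MATH solutions mark the final answer with `\boxed{...}`, so a simplified scorer might look like the sketch below. Exact string match is a simplifying assumption here; published evaluations typically normalize answers (whitespace, equivalent forms) before comparing.

```python
# Simplified MATH-style scorer: compare the final \boxed{...} answer in a
# model's output against the boxed answer in the reference solution.

def extract_boxed(text: str) -> str | None:
    """Return the contents of the last \\boxed{...} in `text`, or None."""
    start = text.rfind(r"\boxed{")
    if start == -1:
        return None
    i = start + len(r"\boxed{")
    depth = 1
    chars = []
    # Scan forward, tracking brace depth so nested braces (e.g. \frac{1}{2}) work.
    while i < len(text):
        ch = text[i]
        if ch == "{":
            depth += 1
        elif ch == "}":
            depth -= 1
            if depth == 0:
                return "".join(chars)
        chars.append(ch)
        i += 1
    return None  # unbalanced braces

def math_accuracy(predictions: list[str], references: list[str]) -> float:
    """Fraction of predictions whose final boxed answer matches the reference's."""
    correct = 0
    for pred, ref in zip(predictions, references):
        ans, gold = extract_boxed(pred), extract_boxed(ref)
        if ans is not None and ans == gold:
            correct += 1
    return correct / len(references)
```

For example, `math_accuracy(["so the answer is \\boxed{42}."], ["... \\boxed{42}"])` returns `1.0`.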
🏅 Top traders
# | Name | Total profit
---|---|---
1 | | Ṁ84
2 | | Ṁ80
3 | | Ṁ79
4 | | Ṁ67
5 | | Ṁ17
Seems like this already happened? https://ai.googleblog.com/2022/06/minerva-solving-quantitative-reasoning.html :O
Related questions
Will there be a model that has a 75% win rate against the latest iteration of GPT-4 as of January 1st, 2025?
43% chance
Will an AI get a perfect SAT score before 2025?
81% chance
Will any AI be able to formalize >=90% of IMO problems by the start of 2025?
32% chance
Will an AI model outperform 95% of Manifold users on accuracy before 2026?
67% chance
Will an AI be capable of achieving a perfect score on the Putnam exam before 2030?
33% chance
Will OpenAI release a model capable of reliably performing grade-school math from reasoning by Jan 1, 2025?
73% chance
Will reinforcement learning overtake LMs on math before 2028?
43% chance
Will OpenAI's next-gen math-focused model score at least 95% on the MATH benchmark?
62% chance
Will an AI be able to convert recent mathematical results into fully formal proofs that can be verified by a mainstream proof assistant by 2025?
23% chance
Will an AI agent system be able to score at least 40% on level 3 tasks in the GAIA benchmark before 2025?
49% chance