Will a machine learning model score above 50.0% on the MATH dataset before 2025?
Resolved YES (Jun 30)
From Hendrycks et al. (https://arxiv.org/abs/2103.03874):

> Many intellectual endeavors require mathematical problem solving, but this skill remains beyond the capabilities of computers. To measure this ability in machine learning models, we introduce MATH, a new dataset of 12,500 challenging competition mathematics problems. Each problem in MATH has a full step-by-step solution which can be used to teach models to generate answer derivations and explanations. [...]

> Even though we are able to increase accuracy on MATH, our results show that accuracy remains relatively low, even with enormous Transformer models. Moreover, we find that simply increasing budgets and model parameter counts will be impractical for achieving strong mathematical reasoning if scaling trends continue. While scaling Transformers is automatically solving most other text-based tasks, scaling is not currently solving MATH. To have more traction on mathematical problem solving we will likely need new algorithmic advancements from the broader research community.

In addition:

> It's also worth mentioning the competition maths problems in MATH are designed under the assumption that competitors don't use calculators or script executors. That way, solving them requires making a clever observation or reducing the search space to make the problem tractable. With a script executor, competitors do not need to figure out how to succinctly reason to the conclusion, and cleverness is rarely needed.

> There are other competition problems designed to be difficult even with calculators and script executors, but there are not nearly as many of these problems lying around.

The best model in the paper achieved an average accuracy of only 6.9% on the dataset.

This question resolves to YES if the state-of-the-art average accuracy score on the MATH dataset, as reported prior to January 1st, 2025 Eastern Time, is above 50.0%. Credible reports include but are not limited to blog posts, arXiv preprints, and papers. Otherwise, it resolves to NO. I will use my discretion in determining whether a result should be considered valid. Obvious cheating, such as including the test set in the training data, does not count. Only results that use a no-calculator restriction will count.
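For concreteness, here is a minimal Python sketch of the kind of metric the resolution hinges on: average exact-match accuracy over MATH problems, compared against the 50.0% threshold. The `average_accuracy` helper and the example inputs are hypothetical illustrations, not the market's official grading procedure.

```python
def average_accuracy(predictions: list[str], answers: list[str]) -> float:
    """Fraction of problems where the model's final answer exactly
    matches the reference answer (MATH is graded on the final answer,
    not the intermediate derivation)."""
    assert len(predictions) == len(answers)
    correct = sum(p == a for p, a in zip(predictions, answers))
    return correct / len(answers)

# Hypothetical example: getting 7,000 of MATH's 12,500 problems right
# gives 56.0% average accuracy, which would clear the 50.0% bar.
acc = average_accuracy(["4"] * 7000 + ["wrong"] * 5500, ["4"] * 12500)
print(f"{acc:.1%}")   # 56.0%
assert acc > 0.500    # the market's resolution threshold
```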

🏅 Top traders

| # | Name | Total profit |
|---|------|--------------|
| 1 |      | Ṁ84          |
| 2 |      | Ṁ80          |
| 3 |      | Ṁ79          |
| 4 |      | Ṁ67          |
| 5 |      | Ṁ17          |
bought Ṁ200 of YES
Seems like this already happened? https://ai.googleblog.com/2022/06/minerva-solving-quantitative-reasoning.html :O
bought Ṁ200 of YES
In terms of development time in the field of AI, 3 years is a lot. I'm confident we'll get there. If this were set to "by end of 2022", I'd be confident we wouldn't get there.
bought Ṁ20 of YES
Easily