Will o3 perform as well on the FrontierMath holdout dataset as it does on the main set?
Dec 31 · 69% chance

Background

OpenAI's o3 model reportedly achieves 25% accuracy on the FrontierMath benchmark, a significant achievement in mathematical problem-solving. However, OpenAI reportedly had access to the dataset while it was being created. Epoch AI, the creators of FrontierMath, are developing a holdout dataset specifically designed to test o3's capabilities without any prior exposure to the problems.

Resolution Criteria

This market will resolve YES if o3's performance on the FrontierMath holdout dataset is within 5 percentage points of its reported 25% accuracy on the main dataset (i.e., between 20% and 30% accuracy). It will resolve NO if:

  • The performance differs by more than 5 percentage points from the reported 25% accuracy

  • OpenAI declines to have o3 tested on the holdout dataset

  • The holdout dataset testing is not completed by December 31, 2024
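As a sketch, the resolution rule above can be expressed as a small check. The function name and parameters are illustrative, not part of the market's wording:

```python
def market_resolution(holdout_accuracy_pct: float,
                      main_accuracy_pct: float = 25.0,
                      tolerance_pp: float = 5.0,
                      tested_by_deadline: bool = True) -> str:
    """Hypothetical resolution check for this market.

    Resolves NO if testing was not completed by the deadline (or OpenAI
    declined testing, modeled here as tested_by_deadline=False); otherwise
    resolves YES only when the holdout score is within `tolerance_pp`
    percentage points of the reported main-set score.
    """
    if not tested_by_deadline:
        return "NO"
    if abs(holdout_accuracy_pct - main_accuracy_pct) <= tolerance_pp:
        return "YES"
    return "NO"
```

For example, a 22% holdout score would resolve YES, a 14% score would resolve NO, and any score resolves NO if testing never happens by the deadline.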

Considerations

  • Elliot Glazer, lead mathematician at Epoch AI, believes OpenAI has no incentive to misrepresent their internal benchmarking but emphasizes the importance of independent verification

  • The holdout dataset is specifically being designed to prevent any potential advantages from prior exposure or overfitting

  • This will be one of the first major independent verifications of a leading AI model's mathematical capabilities

Comments

All questions of overfitting etc aside, I'd bet it's probably tricky to norm the two sets given the nature of the problems. A gap larger than ±5 percentage points seems very plausible on that basis alone.

  • The holdout dataset testing is not completed by December 31, 2024

2025, presumably

© Manifold Markets, Inc.