Background
OpenAI's o3 model reportedly achieves 25% accuracy on the FrontierMath benchmark, a significant result in mathematical problem-solving. However, OpenAI had access to the dataset while it was being created, which raises questions about the reported score. Epoch AI, the creators of FrontierMath, are developing a holdout dataset specifically designed to test o3's capabilities without any prior exposure to the problems.
Resolution Criteria
This market will resolve YES if o3's performance on the FrontierMath holdout dataset is within 5 percentage points of its reported 25% accuracy on the main dataset (i.e., between 20% and 30% accuracy, inclusive). It will resolve NO if any of the following occurs:
- o3's holdout performance differs from the reported 25% accuracy by more than 5 percentage points
- OpenAI declines to have o3 tested on the holdout dataset
- Holdout dataset testing is not completed by December 31, 2024
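The resolution rule above can be sketched as a small check. This is an illustrative helper only; the function name and parameters are hypothetical, not an official market or Epoch AI API:

```python
def resolve(holdout_accuracy=None, tested=True, completed_by_deadline=True):
    """Sketch of this market's resolution logic (hypothetical helper).

    holdout_accuracy: o3's accuracy on the holdout set, in percent,
    or None if no result was produced.
    """
    REPORTED = 25.0   # o3's reported accuracy on the main dataset (%)
    TOLERANCE = 5.0   # allowed deviation, in percentage points

    # NO if OpenAI declines testing or the deadline passes untested
    if not tested or not completed_by_deadline or holdout_accuracy is None:
        return "NO"
    # YES only if the holdout result lands within 20-30% inclusive
    return "YES" if abs(holdout_accuracy - REPORTED) <= TOLERANCE else "NO"
```

For example, a holdout score of 27% would resolve YES, while 12% (or no completed test) would resolve NO.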
Considerations
- Elliot Glazer, lead mathematician at Epoch AI, believes OpenAI has no incentive to misrepresent its internal benchmarking but emphasizes the importance of independent verification
- The holdout dataset is specifically designed to prevent any advantage from prior exposure to the problems or overfitting to them
- This would be one of the first major independent verifications of a leading AI model's mathematical capabilities