This market matches "Software Optimization: GSOBench" from the AI 2026 Forecasting Survey by AI Digest.

Resolution criteria
Resolves to the highest Hack-Adjusted Opt@1 score on the GSO leaderboard as of December 31, 2026.
A task counts as solved if a single attempt achieves ≥95% of the human speedup AND passes all correctness tests. Retries and cherry-picking the best of multiple runs are not allowed. This metric is referred to as the Opt@1 score.
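As an illustration of this scoring rule, here is a minimal Python sketch. The names (Attempt, is_solved, opt_at_1) and fields are hypothetical, not GSO's actual schema or API; they simply encode the ≥95%-of-human-speedup and all-tests-pass conditions for a single attempt.

```python
from dataclasses import dataclass

@dataclass
class Attempt:
    """One single-shot attempt on a GSO task (illustrative fields, not GSO's schema)."""
    speedup: float        # speedup achieved by the model's patch
    human_speedup: float  # reference speedup from the human expert's optimization
    tests_passed: bool    # whether all correctness tests passed

def is_solved(a: Attempt, threshold: float = 0.95) -> bool:
    # Solved iff the single attempt reaches at least 95% of the human speedup
    # AND passes every correctness test.
    return a.tests_passed and a.speedup >= threshold * a.human_speedup

def opt_at_1(attempts: list[Attempt]) -> float:
    """Fraction of tasks solved with one attempt each (no retries, no cherry-picking)."""
    return sum(is_solved(a) for a in attempts) / len(attempts) if attempts else 0.0
```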
If GSO releases an updated task set with comparable scope, this question will resolve according to the newest official version.
Which AI systems count?
Any AI system counts if it operates within realistic deployment constraints and doesn't have unfair advantages over human baseliners.
Tool assistance, scaffolding, and any other inference-time elicitation techniques are permitted as long as:
No unfair and systematic advantage. There is no systematic unfair advantage over the humans described in the Human Performance section (e.g. AI systems are not allowed to have multiple outputs autograded when humans get only one attempt, or to access the internet when humans can't).
Human cost parity. Having the AI system complete the task does not use more compute than could be purchased with the wages needed to pay a human to complete the same task to the same level. Any additional costs incurred by the AIs or humans (such as GPU rental costs) are included in the parity estimation.
The pass@k elicitation technique (automatically grading k outputs from a model and choosing the best) is not permitted on this benchmark, because the Opt@1 metric explicitly requires a single attempt with no retries or cherry-picking.
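As a rough illustration of the human cost parity condition above, the check amounts to comparing total AI-side cost with total human-side cost for the same task. The function below is a hypothetical sketch, not part of the official resolution process; the cost breakdown it assumes (compute plus additional costs such as GPU rental on each side) is only one reading of the wording above.

```python
def satisfies_cost_parity(ai_compute_cost: float, ai_extra_costs: float,
                          human_wages: float, human_extra_costs: float) -> bool:
    """Hypothetical parity check: the AI run may not cost more than paying a human
    to complete the same task to the same level, with additional costs (e.g. GPU
    rental) counted on both sides."""
    return (ai_compute_cost + ai_extra_costs) <= (human_wages + human_extra_costs)
```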
If there is evidence of training contamination leading to substantially increased performance, scores will be accordingly adjusted or disqualified.
If a model is released in 2026 but evaluated after year-end, the resolver may include it at their discretion, provided they judge that evaluating it later did not confer an unfair advantage (for example, the scaffolding used should have been available within 2026).
Eli Lifland is responsible for final judgment on resolution decisions.
Human cost estimation process:
1. Rank questions by human cost. For each question, estimate how much it would cost for humans to solve it. If humans fail on a question, factor in the additional cost required for them to succeed.
2. Match the AI's accuracy to a human cost total. If the AI system solves N% of questions, identify the cheapest N% of questions (by human cost) and sum those costs to determine the baseline human total.
3. Account for unsolved questions. For each question the AI does not solve, add the maximum cost from that bottom N%. This ensures both humans and AI systems are compared under a fixed per-problem budget, without relying on humans to dynamically adjust their approach based on difficulty.
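To make these steps concrete, here is a minimal Python sketch of the calculation. The function and variable names are illustrative, and rounding N% down to a whole number of questions is an assumption rather than part of the official methodology.

```python
import math

def human_cost_baseline(human_costs: list[float], ai_solve_fraction: float) -> float:
    """Illustrative human cost total for an AI system that solves ai_solve_fraction
    of the questions. human_costs holds the estimated per-question human cost,
    already including the extra cost for humans to succeed where they first fail."""
    ranked = sorted(human_costs)                       # step 1: rank by human cost
    n_solved = math.floor(ai_solve_fraction * len(ranked))
    cheapest = ranked[:n_solved]                       # step 2: cheapest N% of questions
    baseline = sum(cheapest)
    if cheapest:                                       # step 3: each unsolved question adds
        baseline += max(cheapest) * (len(ranked) - n_solved)  # the max cost in that bottom N%
    return baseline
```

For example, with per-question human costs of [10, 20, 30, 40] and an AI system that solves 50% of questions, the baseline would be 10 + 20 for the two cheapest questions plus 20 for each of the two unsolved ones, i.e. 70.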
Buckets are left-inclusive: e.g., 30-40% includes 30.0% but not 40.0%.