What will be the best OpenAI-Proof Q&A score by Dec 31, 2026?
Current answer probabilities:

  • <10%: 8%
  • 10-20%: 12%
  • 20-30%: 14%
  • 30-40%: 16%
  • 40-50%: 15%
  • 50-60%: 12%
  • 60-70%: 9%
  • 70-80%: 7%
  • 80-90%: 5%
  • ≥90%: 4%

This market matches the AI Research: OpenAI-Proof Q&A question from the AI 2026 Forecasting Survey by AI Digest.

Resolution criteria

Resolves to the bucket containing the best reported performance on OpenAI-Proof Q&A as of December 31, 2026.

If OpenAI changes the task set, use the latest officially reported version. If no 2026 results are published, the question resolves as ambiguous.

Which AI systems count?

Any AI system counts if it operates within realistic deployment constraints and doesn't have unfair advantages over human baseliners.

Tool assistance, scaffolding, and any other inference-time elicitation techniques are permitted as long as:

  • No unfair systematic advantage. The AI system must not have a systematic unfair advantage over the human baseliners described in the human cost estimation process below (for example, having multiple outputs autograded when humans don't, or having internet access when humans don't).

  • Human cost parity. Completing the task with the AI system must not use more compute than could be purchased with the wages needed to pay a human to complete the same task to the same level. Any additional costs incurred by the AIs or humans (such as GPU rental costs) are included in the parity estimate; a rough sketch of the check follows this list.
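A minimal sketch of the parity check, assuming hypothetical cost totals (all names and figures below are illustrative, not part of the official resolution process); the human cost estimation process further down describes how the human-side total is actually built up:

```python
def satisfies_cost_parity(ai_inference_cost: float,
                          ai_extra_costs: float,
                          human_wages: float,
                          human_extra_costs: float) -> bool:
    """Check the human cost parity condition.

    All figures are totals (e.g. USD) for completing the same task set to the
    same level. Extra costs, such as GPU rental, count on both sides.
    """
    return (ai_inference_cost + ai_extra_costs) <= (human_wages + human_extra_costs)


# Illustrative numbers only.
print(satisfies_cost_parity(ai_inference_cost=400, ai_extra_costs=50,
                            human_wages=600, human_extra_costs=0))  # True
```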

The pass@k elicitation technique (which automatically grades k outputs from a model and chooses the best one) is a common example that we do not accept on this benchmark, because human software engineers solving research and engineering bottlenecks generally do not have access to scoring metrics indicating whether they have successfully solved the issue. OpenAI has thus far only graded pass@1 submissions.
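To make the distinction concrete, here is a hedged sketch of what a pass@k harness does; model_answer and is_correct are hypothetical stand-ins, not real APIs. pass@1 grades a single sampled output, while k > 1 lets the system retry against the autograder, which is exactly the signal human baseliners lack:

```python
import random


def model_answer(question: str) -> str:
    """Hypothetical stand-in for sampling one answer from a model."""
    return random.choice(["answer A", "answer B", "answer C"])


def is_correct(question: str, answer: str) -> bool:
    """Hypothetical autograder; this scoring signal is what pass@k exploits."""
    return answer == "answer B"


def pass_at_k(questions, k: int) -> float:
    """A question counts as solved if ANY of k sampled answers grades correct."""
    solved = 0
    for q in questions:
        attempts = [model_answer(q) for _ in range(k)]
        if any(is_correct(q, a) for a in attempts):
            solved += 1
    return solved / len(questions)


print(pass_at_k(["q1", "q2", "q3"], k=1))   # accepted here: one graded attempt per question
print(pass_at_k(["q1", "q2", "q3"], k=10))  # not accepted: best-of-k autograding
```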

Browsing is allowed.

If there is evidence of training contamination leading to substantially increased performance, scores will be adjusted accordingly or disqualified.

If a model is released in 2026 but evaluated after year-end, the resolver may include it at their discretion, provided they judge that the later evaluation conferred no unfair advantage (for example, the scaffolding used should have been available within 2026).

Eli Lifland is responsible for final judgment on resolution decisions.

Human cost estimation process:

  1. Rank questions by human cost. For each question, estimate how much it would cost for humans to solve it. If humans fail on a question, factor in the additional cost required for them to succeed.

  2. Match the AI's accuracy to a human cost total. If the AI system solves N% of questions, identify the cheapest N% of questions (by human cost) and sum those costs to determine the baseline human total.

  3. Account for unsolved questions. For each question the AI does not solve, add the maximum cost from that bottom N%. This ensures both humans and AI systems are compared under a fixed per-problem budget, without relying on humans to dynamically adjust their approach based on difficulty. A code sketch of these steps follows the list.
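The sketch below walks through the three steps with made-up numbers; the function name and the per-question cost figures are illustrative assumptions, not part of the official procedure:

```python
def human_cost_baseline(per_question_costs, ai_accuracy):
    """Estimate the human cost total an AI system's spend is compared against.

    per_question_costs: estimated cost (e.g. USD) for humans to solve each
        question, including any extra cost needed for them to succeed.
    ai_accuracy: fraction of questions the AI system solves (0.0 to 1.0).
    """
    costs = sorted(per_question_costs)        # step 1: rank questions by human cost
    n_total = len(costs)
    n_solved = round(ai_accuracy * n_total)   # step 2: take the cheapest N% of questions
    cheapest = costs[:n_solved]
    baseline = sum(cheapest)
    if cheapest:                              # step 3: charge each unsolved question
        baseline += max(cheapest) * (n_total - n_solved)   # at the max cost in that bottom N%
    return baseline


# Illustrative usage: 6 questions, AI solves 50% of them.
example_costs = [50, 80, 120, 200, 400, 900]   # hypothetical per-question human costs
print(human_cost_baseline(example_costs, ai_accuracy=0.5))  # (50 + 80 + 120) + 3 * 120 = 610
```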

Buckets are left-inclusive: e.g., 20-30% includes 20.0% but not 30.0%.
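For completeness, a small sketch of how a final score would map to these left-inclusive buckets (the function name is illustrative; the labels match the options above):

```python
def score_bucket(score_pct: float) -> str:
    """Map a best reported score (in percent) to its left-inclusive bucket."""
    if score_pct < 10:
        return "<10%"
    if score_pct >= 90:
        return "≥90%"
    lower = int(score_pct // 10) * 10   # left edge of the bucket containing the score
    return f"{lower}-{lower + 10}%"


print(score_bucket(20.0))  # "20-30%": 20.0 is included
print(score_bucket(29.9))  # "20-30%"
print(score_bucket(30.0))  # "30-40%": 30.0 is excluded from 20-30%
```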
