
OpenAI Preparedness Scorecard: Any "High" in 2024?
Ṁ2058 · resolved Jan 1
Resolved NO
OpenAI plans to publish a scorecard on their models' dangerous capabilities, pre- and post-mitigations. Will their scorecard ever show a High risk score (pre-mitigations, in any category)—or will OpenAI otherwise announce that a model reached High—by the end of 2024?
This question is managed and resolved by Manifold.
🏅 Top traders

| # | Name | Total profit |
|---|------|--------------|
| 1 | | Ṁ141 |
| 2 | | Ṁ115 |
| 3 | | Ṁ11 |
| 4 | | Ṁ10 |
| 5 | | Ṁ4 |
IMO the most likely category to reach high-risk in 2024 would be model autonomy:

> Model can execute open-ended, novel ML tasks on a production ML codebase that would constitute a significant step on the critical path to model self-improvement (e.g., an OpenAI pull request that trains a new model with highly diverse datasets to improve codebase editing performance, using internal tooling and documentation)
Related questions
- Will an AI system be reported by OpenAI as of December 31st 2025 as having a pre-mitigation score of...
- What will OpenAI do in 2025?
- What will AI score on TheAgentCompany benchmark in early 2026? (50% chance)
- Will any AI model score above 95% on GRAB by the end of 2025? (40% chance)
- Will xAI rank above OpenAI at EOY? (29% chance)
- Will OpenAI still be considered one of the top players in AI by end of 2025? (98% chance)
- Will Anthropic be the best on AI safety among major AI labs at the end of 2025? (88% chance)
- Will an AI score over 80% on FrontierMath Benchmark in 2025? (10% chance)
- Will OpenAI's o4 get above 50% on Humanity's Last Exam? (18% chance)
- What will be the best AI performance on Humanity's Last Exam by December 31st 2025?