
OpenAI Preparedness Scorecard: Any "High" in 2024?
Ṁ2,058 · Resolved NO on Jan 1
OpenAI plans to publish a scorecard on their models' dangerous capabilities, pre- and post-mitigations. Will their scorecard ever show a High risk score (pre-mitigations, in any category)—or will OpenAI otherwise announce that a model reached High—by the end of 2024?
This question is managed and resolved by Manifold.
🏅 Top traders

| # | Name | Total profit |
|---|------|--------------|
| 1 | | Ṁ141 |
| 2 | | Ṁ115 |
| 3 | | Ṁ11 |
| 4 | | Ṁ10 |
| 5 | | Ṁ4 |
IMO the most likely category to reach High risk in 2024 would be model autonomy:

> Model can execute open-ended, novel ML tasks on a production ML codebase that would constitute a significant step on the critical path to model self-improvement (e.g., an OpenAI pull request that trains a new model with highly diverse datasets to improve codebase editing performance, using internal tooling and documentation)
Related questions
- Will any AI model score >80% on Epoch's Frontier Math Benchmark in 2025? (17% chance)
- Will an AI score over 80% on FrontierMath Benchmark in 2025 (22% chance)
- Will an AI system be reported by OpenAI as of December 31st 2025 as having a pre-mitigation score of...
- Will AI image generating models score >= 90% on Winoground by June 1, 2025? (76% chance)
- Will any AI model score above 95% on GRAB by the end of 2025? (42% chance)
- Will xAI rank above OpenAI at EOY? (23% chance)
- Will OpenAI still be considered one of the top players in AI by end of 2025 (97% chance)
- Will Anthropic be the best on AI safety among major AI labs at the end of 2025? (84% chance)
- Will OpenAI's o4 get above 50% on Humanity's Last Exam? (17% chance)
- What will be the best AI performance on Humanity's Last Exam by December 31st 2025?