
Will there be substantive issues with Safe AI’s claim to forecast better than the Metaculus crowd, found before 2025?
102 · 1k · Ṁ58k · resolved Jan 31
Resolved as 85%
https://www.safe.ai/blog/forecasting
Resolves YES if the claim turns out not to hold in a common-sense way. Things that would resolve this YES (draft; see the scoring sketch after this list):
- We look at the data and it turns out that information about outcomes was somehow leaked to the LLM.
- The questions were selected in a way that favored easy questions.
- The forecast dates were chosen in a way that benefited the LLM.
- The result doesn't continue to hold over the next year of questions, i.e., the AI is not more accurate than the Metaculus crowd prospectively, rather than only against last year's crowd forecasts (a comparison the crowd can't win, since its forecasts become more accurate over time).
- The AI was just accessing existing forecasts and parroting them.
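
For concreteness, here is a minimal sketch of the kind of like-for-like scoring these checks are probing. This is not the methodology from the Safe AI blog post; the data structure, field names (`bot_prob`, `crowd_prob`), and the question data are invented placeholders.

```python
# Minimal sketch: compare mean Brier scores of an AI bot and the Metaculus
# crowd on the SAME resolved questions, using the crowd forecast from the
# same timestamp as the bot forecast, so neither side benefits from later
# information. All data below is made up for illustration.

from dataclasses import dataclass


@dataclass
class Question:
    bot_prob: float    # bot's forecast probability of YES at time t
    crowd_prob: float  # Metaculus community forecast at the same time t
    outcome: int       # 1 if the question resolved YES, else 0


def brier(prob: float, outcome: int) -> float:
    """Squared error of a probabilistic forecast; lower is better."""
    return (prob - outcome) ** 2


# Hypothetical resolved questions (placeholders, not real data).
questions = [
    Question(bot_prob=0.80, crowd_prob=0.70, outcome=1),
    Question(bot_prob=0.30, crowd_prob=0.20, outcome=0),
    Question(bot_prob=0.60, crowd_prob=0.75, outcome=1),
]

bot_score = sum(brier(q.bot_prob, q.outcome) for q in questions) / len(questions)
crowd_score = sum(brier(q.crowd_prob, q.outcome) for q in questions) / len(questions)

print(f"mean Brier, bot:   {bot_score:.3f}")
print(f"mean Brier, crowd: {crowd_score:.3f}")
```

The "substantive issues" listed above mostly amount to auditing how a table like `questions` was assembled: whether the question set was cherry-picked, whether `crowd_prob` was taken from an earlier timestamp than `bot_prob`, and whether the bot's training data or retrieved news already contained the outcomes.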
This question is managed and resolved by Manifold.
🏅 Top traders

| # | Name | Total profit |
|---|------|--------------|
| 1 | | Ṁ410 |
| 2 | | Ṁ253 |
| 3 | | Ṁ231 |
| 4 | | Ṁ223 |
| 5 | | Ṁ214 |
Related questions
Will one of these AI researchers claim we're in an AI winter before 2026?
7% chance
Will there be serious AI safety drama at Meta AI before 2026?
45% chance
Will we have better-than-human-aggregate forecasting AIs by the end of 2024?
4% chance
Will Metaculus still exist and have active forecasting throughout 2025?
93% chance
Will Anthropic be the best on AI safety among major AI labs at the end of 2025?
87% chance
In January 2026, how publicly salient will AI deepfakes/media be, vs AI labor impact, vs AI catastrophic risks?
Will AI progress surprise Metaculus?
77% chance
Will I (@Bayesian) be a superforecaster before 2027?
23% chance
Will AI be considered safe in 2030? (resolves to poll)
72% chance
Will ANY of the three major “AI 2027” predictions come true?
18% chance