Will many open source advocates agree frontier models have crossed capability thresholds too dangerous to open source?
10% chance
...within two years (e.g. by 6/1/26)?
This is Sam Hammond's Thesis IV/2 from https://www.secondbest.ca/p/ninety-five-theses-on-ai.
I will consider 'many' to mean at least 50%. If in my subjective judgment this is unclear, I will ask at least one open source advocate for their view on how this resolves, and act accordingly.
Resolves to YES if, in my subjective judgment, at least half of those who in May 2024 were 'open source advocates,' weighted by prominence, agree that it would be too dangerous to release the weights of what are then state-of-the-art (SoTA) frontier models.
Resolves to NO otherwise.
This question is managed and resolved by Manifold.
Related questions
Will there be a disaster caused by open source developers doing unsafe things with AI by 2028? (62% chance)
Will Manifold think "it would be safer if all AI was open source" when:
Will three or more Frontier AI Labs issue a joint statement committing to constrain their AI's capabilities before 2026? (14% chance)
[Carlini questions] Delay from the best "closed-source" model release to it being reproduced in "open source" in 2030 (14.0)
Will closed source software go extinct within 5 years (2029)? (7% chance)
By 2026, will OpenAI commit to delaying model release if ARC Evals thinks it's dangerous? (13% chance)
Will Open Source chip development have been a crucial factor for enabling a global AI moratorium? (19% chance)
When will a non-Transformer model become the top open source LLM?
Will the US implement software export controls for frontier AI models by 2028? (77% chance)