Will Manifold think "it would be safer if all AI was open source" when:
It's 2025 Jan: 8%
It's 2026 Jan: 10%
GPT 5 comes out: 6%
Llama 4 comes out: 10%
It's 2030: 30%
This question is managed and resolved by Manifold.
I think the problem with this question is the wording. "Safer for humanity" makes me think about existential risks. I think economic risks are MUCH higher and it would be better if more models were open source, but this doesn't translate to "safer for humanity", because I don't think unemployment and inequality will lead to extinction.
In my opinion, existential risk is very low at the moment: from what I've seen, all current models completely fail at displaying agentic behavior. They are also, architecturally, not optimisers, which is what most "AI kills everyone" theories assume.
Related questions
When will AI be at least as big a political issue as abortion on Manifold?
At the beginning of 2026, what percentage of Manifold users will believe that an AI intelligence explosion is a significant concern before 2075?
66% chance
Will Open Source chip development have been a crucial factor for enabling a global AI moratorium?
19% chance
Does open sourcing LLMs/AI models (a la meta) increase risk of AI catastrophe?
61% chance
An AI is trustworthy-ish on Manifold by 2030?
47% chance
Will open-source AI win (through 2025)?
28% chance
Will there be a disaster caused by open source developers doing unsafe things with AI by 2028?
62% chance
Will Manifold stop using AI to make my questions worse by the end of 2025?
28% chance
Will open-source AI remain at least one year behind proprietary AI? (ACX, AI 2027 #4)
68% chance
At the beginning of 2029, what percentage of Manifold users will believe that an AI intelligence explosion is a significant concern before 2075?
77% chance