
Will there be a disaster caused by open source developers doing unsafe things with AI by 2028?
62% chance
"Disaster" is to be interpreted as something that is framed as such by the media - unless the outcome is merely embarassment or something that causes offense.
This question is managed and resolved by Manifold.
Say that somebody uses an open-source AI to facilitate something that the media consensus defines as a disaster. Would that count as "developers doing unsafe things," in the sense that the disaster wouldn't have been possible without the developer releasing the model? Or does the developer need to be more directly involved?
Examples: Bob uses a jailbroken Llama to determine how to make and use a nerve agent.
Alice agentizes Llama and uses it to automatically scam a thousand elderly people.
Related questions
Will any computer virus powered by AI cause large damages to digital infrastructure by 2027? - 24% chance
Will open-source AI win? (through 2028) - 34% chance
Will there be a massive catastrophe caused by AI before 2030? - 24% chance
Will there be a terrorist attack against OpenAI before 2027? - 16% chance
Will there be an anti-AI terrorist incident by 2028? - 64% chance
By 2029, will there be a public "rogue AI" incident? - 89% chance
Will there be a highly risky or catastrophic AI agent proliferation event before 2035? - 81% chance
Will someone commit violence in the name of AI safety by 2030? - 60% chance
Does an AI disaster kill at least 100 people before 2029? - 79% chance
Does an AI disaster kill at least 1,000,000 people before 2029? - 12% chance