Will there be a disaster caused by open source developers doing unsafe things with AI by 2028?

"Disaster" is to be interpreted as something that is framed as such by the media - unless the outcome is merely embarassment or something that causes offense.


Say that somebody uses an open source AI to facilitate something that the media consensus defines as a disaster. Would that count as "developers doing unsafe things" in that the disaster wouldn't have been possible without the developer releasing it? Or does the developer need to be more directly involved?

Examples: Bob uses a jailbroken Llama to determine how to make and use a nerve agent.

Alice agentizes Llama and uses it to automatically scam a thousand elderly people.

AutoGPT already exists, so 80%+ yes.
