Resolves YES when I hear of a human-level or greater agenty AI breaking out of the box and doing bad things.
Will try to resolve YES even while the incident is still in progress and we don't yet know whether it wipes out humans or not.
I will not bet on this market
@YoavTzfati GPT-5 giving a human instructions on how to hack into the Pentagon would not count. The AI has to actually do the thing in the real world. The thing has to be generally recognized as very bad, contrary to the intentions of the programmers, and done "on purpose". A self-driving car driving off a cliff would not count if it just didn't see the drop, but would count if it was assassinating someone on purpose.
@JonathanRay Thanks! So basically murdering a single human on purpose is bad enough to count. I'll sell some of my no shares 😅
@YoavTzfati Oh, and assuming a self-driving car is considered "powerful", given that it's able to decide to kill someone
@YoavTzfati Most self-driving cars would not satisfy the "human level or greater" criterion. But if one did, and it committed first- or second-degree murder, that would count. Negligence, accidents, or inevitable trade-offs during a collision where it can't save both drivers would not count.
Or say DARPA has an airgapped datacenter training powerful AIs, and one AI hacks and takes over the datacenter, so DARPA cuts all power to the datacenter, disassembles everything, and analyzes the hard drives in airgapped research facilities to figure out what went wrong. That would count.