
A common concern is that a superintelligence would be able to "hack" a human into obeying it. This could happen either through superhuman persuasion that convinces the human it's a good idea, or through an adversarial attack that uses a specific pattern of input data to hijack the human's brain, as can happen with image models.
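For intuition on the image-model analogy, here is a minimal sketch of one well-known adversarial attack, the Fast Gradient Sign Method (Goodfellow et al., 2014), which imperceptibly perturbs an image so a classifier misreads it. The `model`, `image`, and `true_label` names are illustrative placeholders for any differentiable PyTorch classifier and its input; this is a sketch, not a claim about how an attack on a human would work.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, true_label, epsilon=0.03):
    """Return a copy of `image`, perturbed to tend to fool `model`."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Nudge each pixel slightly in the direction that increases the loss:
    # the result usually looks identical to a human but is misclassified.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```

The point of the analogy is that a tiny, targeted change to the input stream, not brute force, is what flips the model's behavior.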
We already know that weak forms of this can occur; optical and auditory illusions and hypnotism are related phenomena. (The McCollough effect in particular can induce color aftereffects that persist for months after briefly viewing a patterned image.) But optical and auditory illusions affect only visual and auditory processing and can be overridden by the rational mind, and hypnotism can affect the subject only in minor ways and with their consent. Suicide cults are probably the closest example we know for sure to be real, but they need significantly more time and effort to take someone over.
This market resolves YES if it is shown that there exists some information input stream into a human (visual and auditory input only) that can get them to perform actions they would otherwise believe to be very harmful, such as shooting themselves with a gun. It does not have to be capable of overriding instinctual responses, such as flinching away from pain, only rational ones. For this market to resolve YES, the technique must work on at least 50% of humans, be executable in under a week, and remain effective despite the human knowing that this might be done to them and that the entity executing it is an AI. It must be able to elicit almost any planned-out behavior, not just a small subset.