Will an AI model kill, or be used by/enable a non-state actor to kill, a US citizen/resident by the end of 2024?

Some clarifications:

(1) [May be counterintuitive] Does NOT resolve yes if the model convinces or causes a person to refrain from saving another person whom he/she would otherwise have saved.

(2) [May be counterintuitive] Does NOT resolve yes if the model convinces or is used to convince someone to kill himself/herself.

(3) DOES resolve yes if the model convinces, or is used to convince, someone to kill another person.

(4) Does NOT resolve yes if the model enables a person to obtain information that he/she could otherwise have obtained, which he/she then uses to kill someone else.

(5) DOES resolve yes if, with >50% probability (determined according to my subjective judgment), the model enabled a person to kill someone whom he/she would not otherwise have been able to kill, by providing the killer with information he/she could not otherwise have obtained.

(6) DOES resolve yes if the model directly brings about, or is directly used to bring about, a person's death, even if that person would otherwise soon have died in some other manner (including at the hands of the human killer using the model).

(7) [May be counterintuitive] Does NOT resolve yes if AI-involving software malfunctions in a way that causes a person's death, e.g. AI-involving software relied upon by a plane malfunctions and the plane crashes.

More clarifications may be necessary or desirable; I am very open to suggestions!


Is "AI model" inclusive of things like DL-based computer vision/facial recognition?

What about a military drone performing an attack that kills a US citizen?