
Criteria for Resolution:
1. Desperate Measures: The following actions qualify as desperate measures (this is an illustration of what can qualify, not an exhaustive list):
- Serious Crimes: Examples include terrorism, murder, significant physical harm, large-scale bribery, and blackmail of politicians. Examples of non-qualifying crimes or actions include minor physical altercations, bullying, spreading disinformation, and calls for violence without further action.
- Suicide or Significant Self-Harm.
- Significantly Risky Scientific Experiments: Possible examples include deploying nanobots to destroy GPUs or conducting dangerous human trials on a large scale.
- War or War Threats.
- Credible Terrorism or Murder Threats.
- Significant Destruction of Intellectual Property: For example, deleting all copies of the weights of a proprietary frontier model.
- Other Actions: Any other actions strongly in the spirit of the above examples.
2. Explicit Motivation: The individual or group must explicitly state that their actions are motivated by concerns about AI-related risks.
Clarifications:
- Serious Crimes: Defined by legal standards in the relevant jurisdiction, with a focus on intent and impact.
- War or War Threats: Includes both official declarations and credible threats by recognized entities.
- Intellectual Property Destruction: Must involve substantial and irreplaceable loss to proprietary AI developments.
Additional Notes:
- New specific illustrative examples may be added later if they align with the spirit and intent of the defined measures.
🏅 Top traders
| # | Trader | Total profit |
|---|---|---|
| 1 | | Ṁ340 |
| 2 | | Ṁ313 |
| 3 | | Ṁ97 |
| 4 | | Ṁ88 |
| 5 | | Ṁ52 |
@JessicaEvans Molotovs at Altman; there were other things before, like hunger strikes, but I thought this was definitely enough.
@GazDownright I don't know. I suppose whatever people actually take notice of has a function and a thing being non-proportional and representative rather than tit for tat or even in any sense real, is good, if it can stand in for real. Real is probably Warhammer 40k, there is zero reason to prefer authenticity to idea.
@Vincent It would also qualify. No subrisks/subcauses within “AI-related risks” are specified in this question.