MANIFOLD
Will someone take desperate measures due to expectations of AI-related risks by January 1, 2035?
Resolved YES (Apr 13)

Criteria for Resolution:

1. Desperate Measures: The following actions qualify as desperate measures (this list is illustrative of what can qualify, not exhaustive):

- Serious Crimes: Examples include terrorism, murder, significant physical harm, large-scale bribery, and politician blackmailing. Examples of non-qualifying crimes or actions include minor physical altercations, bullying, spreading disinformation, and calls for violence without further action.

- Suicide or Significant Self-Harm.

- Significantly Risky Scientific Experiments: Possible examples include deploying nanobots to destroy GPUs or conducting dangerous human trials on a large scale.

- War or War Threats.

- Credible Terrorism or Murder Threats.

- Significant Destruction of Intellectual Property: For example, deleting all copies of the weights of a proprietary frontier model.

- Other Actions: Any other actions strongly in the spirit of the above examples.

2. Explicit Motivation: The individual or group must explicitly state that their actions are motivated by concerns about AI-related risks.

Clarifications:

- Serious Crimes: Defined by legal standards in the relevant jurisdiction, with a focus on intent and impact.

- War or War Threats: Includes both official declarations and credible threats by recognized entities.

- Intellectual Property Destruction: Must involve substantial and irreplaceable loss to proprietary AI developments.

Additional Notes:

- New specific illustrative examples may be added later if they align with the spirit and intent of the defined measures.

Market context


What triggered resolution?

@JessicaEvans Molotovs thrown at Altman, but there were other things before that, like hunger strikes. I just thought this was definitely enough.

@IhorKendiukhov Well that's silly.

@JessicaEvans which one? hunger strikes or molotovs?

@GazDownright I don't know. I suppose whatever people actually take notice of has a function and a thing being non-proportional and representative rather than tit for tat or even in any sense real, is good, if it can stand in for real. Real is probably Warhammer 40k, there is zero reason to prefer authenticity to idea.

I don't think suicide is in the same order of magnitude as the rest of these

I'd say quitting OpenAI and starting Anthropic was, relatively speaking, rather extreme

bought Ṁ10 YES

Sad market :(

Is this about global existential risk, or would e.g. a military attack seeking to hinder a competitor's AI ability also qualify?

@Vincent It would also qualify. No subrisks/subcauses within “AI-related risks” are specified in this question.