Will someone take desperate measures due to expectations of AI-related risks by January 1, 2035?
91% chance · 38 traders · Ṁ3,907 volume · closes 2034

Criteria for Resolution:

1. Desperate Measures: The following actions qualify as desperate measures (this list illustrates what can qualify and is not exhaustive):

- Serious Crimes: Examples include terrorism, murder, significant physical harm, large-scale bribery, and blackmailing politicians. Examples of non-qualifying crimes or actions include minor physical altercations, bullying, spreading disinformation, and calls for violence without further action.

- Suicide or Significant Self-Harm.

- Significantly Risky Scientific Experiments: Possible examples include deploying nanobots to destroy GPUs or conducting dangerous human trials on a large scale.

- War or War Threats.

- Credible Threats of Terrorism or Murder.

- Significant Destruction of Intellectual Property: For example, deleting all copies of the weights of a proprietary frontier model.

- Other Actions: Any other actions strongly in the spirit of the above examples.

2. Explicit Motivation: The individual or group must explicitly state that their actions are motivated by concerns about AI-related risks.

Clarifications:

- Serious Crimes: Defined by legal standards in the relevant jurisdiction, with a focus on intent and impact.

- War or War Threats: Includes both official declarations and credible threats by recognized entities.

- Intellectual Property Destruction: Must involve substantial and irreplaceable loss to proprietary AI developments.

Additional Notes:

- New specific illustrative examples may be added later if they align with the spirit and intent of the defined measures.

Comments:

I don't think suicide is in the same order of magnitude as the rest of these

I'd say quitting OpenAI and starting Anthropic was, relatively speaking, rather extreme

bought Ṁ10 YES

Sad market :(

Is this about global existential risk, or would e.g. a military attack seeking to hinder a competitor's AI capabilities also qualify?

@Vincent It would also qualify. No sub-risks or sub-causes within “AI-related risks” are specified in this question.
