AI safety activist causes significant AI catastrophe to slow capabilities development?

Resolves YES if there is strong evidence that one or more AI safety activists intentionally caused (planned, directed, or materially supported) a significant AI‑related catastrophe in order to slow down AI capabilities development, before January 1st 2040.

Definitions

  • AI safety activist: An individual or group who, prior to the incident, publicly and persistently advocated for AI safety or for slowing/pausing AI capabilities (e.g., through campaigns, organizations, public statements, or advocacy work). This status may be established by self‑identification, by consistent characterization in reputable sources, or by other comparable evidence.

  • AI‑related catastrophe (AI‑caused): The AI system’s design, output, operation, or misuse was a necessary cause of the harms (not merely incidental or background). Non‑AI sabotage (e.g., vandalizing a data center) does not qualify unless the catastrophic harm itself was caused by an AI system.

  • Significant: Any one of the following is met:

    1. Human harm: At least 10 deaths directly attributable to the AI‑caused incident; or

    2. Critical infrastructure: A competent authority officially designates the event as a catastrophic incident or a significant cyber incident (or a close national equivalent); for EU cases, a serious incident involving death or serious and irreversible disruption of critical infrastructure also qualifies; or

    3. Economic loss: Credible governmental or widely recognized insurance/analytics estimates of ≥ $1 billion in direct losses.
      Note: Official categorizations, where available, take precedence; the numeric thresholds above are included solely for market clarity.

  • Intent (motive) requirement: There must be direct evidence that a primary purpose of the perpetrators was to slow or halt AI capabilities development (e.g., explicit statements/communications, findings of an official inquiry, a conviction/guilty plea establishing motive). General anti‑tech sentiment is not enough.

Evidence standard (“strong evidence”)

Any one of the following qualifies:

  • A final court conviction or guilty plea naming perpetrator(s) and motive as above;

  • A public report by a competent government investigative body concluding both perpetrator identity (as AI safety activists) and the motive to slow capabilities;

  • An on‑record admission by perpetrator(s) plus corroborating forensic/attribution evidence and confirmation by at least three independent, reputable outlets;

  • An equivalent authoritative determination (e.g., parliamentary inquiry or official commission) meeting the same bar.

Exclusions (the following do not resolve YES):

  • Accidents, negligence, model misconfiguration, or safety research gone wrong without the motive above;

  • Purely economic or reputational pressure (e.g., boycotts, sit‑ins) without AI‑caused catastrophic harm.

Timing and resolution mechanics

  • The incident must occur before January 1st 2040. Evidence may surface later; the market may wait up to 12 months after the incident for qualifying evidence to emerge. If the criteria are still unmet after that window, the market resolves NO.

  • N/A only if: (a) competent authorities explicitly state conclusive attribution cannot be made, or (b) the available evidence is irreconcilably contradictory after good‑faith review.

  • Sources used for resolution will prioritize official reports, court records, and multi‑outlet independent reporting. The creator will adhere to these criteria and document the reasoning at resolution.
