Will there be an international moratorium on frontier AGI development by EOY 2033?

This market resolves YES if, before January 1, 2034, there is a formal, publicly announced international agreement or coordinated policy that constitutes a moratorium on frontier AGI development, meeting the criteria below.

The term “moratorium” here refers to a halt or prohibition on training or deploying frontier AI systems at or beyond a certain established threshold (be it capabilities, scale, speed, or something else meaningfully correlated with general capabilities), combined with serious enforcement mechanisms, not merely voluntary guidelines or safety recommendations.

Core Resolution Criteria

The market resolves YES if ALL of the following conditions are met:

1. Substance: Actual Moratorium

A policy, treaty, or coordinated set of national policies explicitly aims to:

  • Pause, ban, or indefinitely halt the training of frontier AI systems beyond a defined capability threshold

  • OR impose strict compute or capability limits clearly intended to prevent AGI-level systems

The restriction must be binding or enforced, not merely advisory.

2. Scope: Frontier / AGI-Relevant Systems

The moratorium must apply to frontier AI systems, meaning systems matching at least one of the following criteria (or something reasonably equivalent):

  • Systems trained with very large-scale compute (e.g., beyond a defined FLOP threshold or similar proxy)

  • Systems described by policymakers as AGI, transformative AI, or posing existential/global catastrophic risk, with clear threshold benchmarks

  • Systems exceeding current SOTA in a way that materially advances toward general intelligence

A ban limited to narrow applications (e.g., autonomous weapons only) does not qualify.

3. Actors: Major Technological Powers

At least one of the following must hold:

Option A (Global Agreement):

  • The moratorium is adopted by all major technological powers, including at minimum:

    • United States

    • China

    • European Union

    • United Kingdom

OR

Option B (Coercive Coalition):

  • A subset of major technological powers (e.g., US/EU or US-led coalition) adopts the moratorium and explicitly commits to enforcement against non-participants, including:

    • Diplomatic pressure (sanctions, export controls, etc.)

    • AND credible statements of escalatory enforcement, which may include cyber or military measures targeting non-compliant AGI efforts

This clause is intended to capture the “we will stop this globally, even if others resist” scenario.

4. Enforcement: Non-Trivial and Credible

The policy must include real enforcement mechanisms, such as:

  • Compute monitoring, licensing, or chip controls

  • Inspections or verification regimes

  • Sanctions or penalties for violations

  • Explicit plans to prevent or shut down non-compliant training runs

Purely symbolic agreements or unenforced declarations do not qualify.

Edge Cases / Interpretation Notes

  • If there is ambiguity about whether a system qualifies as “frontier,” resolution should rely on explicit framing by governments or leading AI labs

  • If enforcement intent is unclear, resolution should default to NO unless there is explicit evidence of coercive enforcement planning

  • Public statements by heads of state or binding legislation carry more weight than informal remarks
