Resolution criteria
This market resolves YES if any country enacts a law that explicitly bans the development, deployment, or use of "advanced AI systems" before January 1, 2027. The ban must be a binding legal prohibition, not merely regulatory restrictions, licensing requirements, or risk-based compliance frameworks.
For resolution purposes:
• "Advanced AI systems" refers to frontier or general-purpose AI models with significant capabilities (e.g., systems comparable to GPT-4 or similar large language models)
• The ban must apply to the entire country or a substantial portion of its territory
• Sector-specific restrictions (e.g., bans on AI in hiring or biometric surveillance) do not count as a country-wide ban on advanced AI systems
• Regulatory delays, compliance timelines, or implementation postponements do not constitute a ban
Resolution will be verified through official government sources, legislative databases, or major news outlets covering the enactment of such legislation.
Background
The EU AI Act officially became law on August 1, 2024, with implementation staggered from early 2025 onwards. However, the EU's risk-based approach categorizes AI systems into four risk levels, with unacceptable-risk AI systems banned entirely, including applications that manipulate users, exploit vulnerabilities, or enable mass biometric surveillance. This is regulatory restriction rather than a blanket ban on advanced AI development.
The United States lacks comprehensive federal legislation on AI, with regulation occurring through agency-specific guidelines and a growing body of state-level AI regulation. The U.S. has been at the forefront of deregulatory trends, with executive orders from President Donald Trump seeking to remove barriers to AI development and stimulate innovation.
Most major jurisdictions are pursuing regulatory frameworks rather than outright bans. Different jurisdictions push different models of AI regulation: some rights-first, some innovation-first, some control-first.
Considerations
No country has enacted a comprehensive ban on advanced AI systems as of March 2026. According to Stanford University's 2025 AI Index, legislative mentions of AI rose 21.3% across 75 countries since 2023, marking a ninefold increase since 2016. The global trend favors regulation and governance frameworks over outright bans, with most policymakers balancing innovation concerns against safety considerations.
I'm taking this to mean banning the use or development of everything in at least one of these categories:
• LLMs
• generative image models
• LLMs beyond some level of capability
• generative image models beyond some level of capability
And not just banning:
• Some aspect of their use, such as requiring users to be over 16.
• Some companies or products, for example products from companies that refuse to comply with regulations.
• Use beyond a certain amount of resource consumption each period, for example a company's grid electricity use in a given area each period.
I'm also assuming that internationally recognised countries count, so it could be a tiny island country with just a few hundred people on it.
I assume North Korea, where computers are banned for almost the whole population, is just not applicable to this question.