MANIFOLD
Will a major tech company publicly pause or limit AI development due to safety concerns before January 1, 2027?
30% chance · closes Dec 31

Resolution criteria

This market resolves YES if a major tech company (defined as a company with market capitalization exceeding $100 billion or annual revenue exceeding $50 billion) publicly announces a pause or meaningful limitation on AI development activities due to safety concerns before January 1, 2027. The announcement must explicitly cite safety as the primary reason and involve a concrete action (e.g., halting training of specific models, delaying product releases, or reducing AI R&D investment). Statements about "prioritizing safety" or implementing safety measures without pausing development do not qualify. Resolution sources include official company announcements, SEC filings, press releases, and statements from company leadership reported by major news outlets.

Background

As of July 2024, no pause on AI development has materialized; instead, AI companies have directed "vast investments in infrastructure to train ever-more giant AI systems". In May 2024, OpenAI dissolved its AI safety team days after its two AI safety leads resigned, signaling that major companies have not moved toward development pauses. For the first time, all three major AI companies released models with heightened safeguards after pre-deployment testing could not rule out that the systems could meaningfully help novices develop biological weapons, indicating that companies are layering on safety measures rather than pausing development. Capability gains continue to outpace the expansion of safety-focused efforts, and this widening gap leaves the sector structurally unprepared for the risks it is actively creating.

Considerations

New York's RAISE Act takes effect January 1, 2027, which may create regulatory pressure on AI companies, but regulatory compliance does not constitute a safety-motivated pause. The distinction between implementing safety measures (which companies are doing) and pausing development (which has not occurred) is critical for resolution.

Comments
One trader (bought Ṁ10 YES) commented:

https://www.anthropic.com/glasswing says:

We do not plan to make Claude Mythos Preview generally available, but our eventual goal is to enable our users to safely deploy Mythos-class models at scale—for cybersecurity purposes, but also for the myriad other benefits that such highly capable models will bring. To do so, we need to make progress in developing cybersecurity (and other) safeguards that detect and block the model’s most dangerous outputs.

I'm not sure this resolves YES yet, but it's a strong sign that companies will at least make safety-motivated statements about restricting releases.