Software Intelligence Explosion by 2040?

Resolves as YES if there is strong evidence that, before January 1st 2040, there exists at least one contiguous 5‑year period (fully contained between market creation and January 1st 2040) during which frontier AI systems undergo a “software intelligence explosion” as defined below.

For this market, a software intelligence explosion (SIE) means that, over that 5‑year window:

  1. Multi‑order‑of‑magnitude software efficiency gains.

    • Algorithmic / software advances (including model architectures, training methods, data generation and curation techniques, optimization methods, inference‑time algorithms, scaffolding, agentic systems, etc.) yield at least ~two orders of magnitude (≥100×) improvement in effective capabilities at a given hardware cost, as measured by retrospective “effective compute” or efficiency analyses across key general‑purpose AI domains (especially language models and other broadly capable systems).

    • “Effective capabilities” here means that, holding hardware and training budget fixed, state‑of‑the‑art systems at the end of the window are vastly more capable than comparable systems at the start, in ways that matter for general reasoning, science/engineering work, and AI R&D.

  2. Self‑driving AI R&D (AI improving AI).

    • By the later part of the window, AI systems are performing a large majority of the cognitive labor involved in frontier AI research and development, including: proposing new architectures and training schemes, writing and refactoring most of the relevant code, designing and prioritizing experiments, generating and curating synthetic data, and interpreting results.

    • Humans remain involved (e.g. for goal‑setting, high‑level direction, safety and governance), but the bulk of day‑to‑day algorithmic and experimental work at the frontier is done by AI systems themselves. As a rough guide, roughly 50–80% or more of effective R&D “brainpower” at leading labs should be coming from AI rather than humans.

Hardware progress (better chips, more datacenters, higher training budgets) is allowed and expected in YES worlds. However, the spirit of the question is that the explosive dynamic is primarily software‑driven and AI‑automated. If, in hindsight, the main story of AI progress in every 5‑year period before 2040 is “we bought a lot more compute and scaled known techniques,” with only modest, human‑driven algorithmic progress and limited AI automation of AI R&D, this market should resolve NO.
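As a rough illustration only (not part of the resolution criteria), the sketch below shows how the “≥100× software efficiency at fixed hardware” condition could be computed from hypothetical retrospective estimates. The numbers, the function name, and the simplification of treating effective compute as physical compute times a software multiplier are assumptions for illustration, not a prescribed methodology.

```python
# Minimal sketch (hypothetical numbers) of the ">=100x software efficiency" test.
# Assumption: "effective compute" is modeled as physical training compute times a
# software-efficiency multiplier, so reaching a fixed capability level with k times
# less physical compute (hardware and budget held fixed) implies a k-times gain.

def software_efficiency_gain(compute_needed_start: float, compute_needed_end: float) -> float:
    """Factor by which software progress alone reduced the physical compute
    needed to reach a fixed capability level over the window."""
    return compute_needed_start / compute_needed_end

# Hypothetical retrospective estimate: a fixed 2025-era capability level required
# 1e25 FLOP of training compute at the start of the window but only 5e22 FLOP at
# the end, after hardware price-performance improvements are factored out.
gain = software_efficiency_gain(1e25, 5e22)
print(f"software efficiency gain: {gain:.0f}x")     # 200x
print("meets the >=100x condition:", gain >= 100)   # True
```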


Evidence the resolver should look for in a YES world

By early 2040, the market should resolve YES if a reasonable, well‑informed observer would agree that there was at least one 5‑year window before 2040 where both of the above conditions clearly held. Relevant kinds of evidence include (not all strictly required, but several should be present):

  • Retrospective efficiency estimates from major labs, academic papers, or measurement/forecasting orgs indicating that software/algorithmic progress alone reduced the compute required to reach a given broad capability level by ≥100× relative to 2025‑era baselines, after controlling for hardware improvements and training budget.

  • Analyses of AI R&D workflows showing that AI systems are doing most of the intellectual heavy lifting in frontier AI research (for example, major labs reporting that most code merged into core training stacks was first authored by AI tools; most new algorithmic ideas and experiment designs were proposed by AI agents; or internal audits showing most researcher‑time is spent supervising and steering AI research workers rather than doing low‑level work themselves).

  • Mainstream expert/institutional consensus (e.g. in technical surveys, major lab retrospectives, policy analyses) that there was a period of extremely rapid, self‑accelerating software‑driven AI progress that is commonly described as a “software intelligence explosion” (or a very close synonym like “runaway algorithmic efficiency boom” or “explosive software takeoff”), backed by quantitative evidence rather than just rhetorical flourish.

If such a period exists, the resolver should pick the most favourable contiguous 5‑year span (e.g. 2031–2036 rather than 2030–2035) when checking whether the criteria above are met.
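Purely as an illustration of that window search, the sketch below scans hypothetical yearly estimates for a contiguous 5‑year span satisfying both conditions. The yearly figures, the 0.5 AI‑share threshold, and the variable names are assumptions for illustration, not part of the resolution criteria.

```python
from math import prod

# Hypothetical yearly estimates: sw_gain[y] is the software-efficiency multiplier
# achieved during year y (1.0 = no progress); ai_share[y] is the estimated fraction
# of frontier AI-R&D cognitive labor performed by AI systems in year y.
sw_gain  = {2030: 1.5, 2031: 2.0, 2032: 2.5, 2033: 3.0, 2034: 3.5, 2035: 4.0}
ai_share = {2030: 0.2, 2031: 0.3, 2032: 0.4, 2033: 0.55, 2034: 0.7, 2035: 0.8}

def window_qualifies(start_year: int) -> bool:
    years = range(start_year, start_year + 5)
    # The window must end before January 1st 2040 and have estimates for every year.
    if start_year + 5 > 2040 or any(y not in sw_gain for y in years):
        return False
    total_sw_gain = prod(sw_gain[y] for y in years)   # condition 1: >=100x over the window
    late_ai_share = ai_share[start_year + 4]          # condition 2: later part of the window
    return total_sw_gain >= 100 and late_ai_share >= 0.5

qualifying_starts = [y for y in sw_gain if window_qualifies(y)]
print("qualifying 5-year windows start in:", qualifying_starts)  # [2031] with these numbers
```

With these made‑up numbers only the span beginning in 2031 qualifies; the resolver would simply pick whichever qualifying span is most favourable.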


Examples that should NOT count as YES

This market should resolve NO if, by early 2040, the best available evidence indicates that no 5‑year period between market creation and January 1st 2040 satisfies both the multi‑OOM software gains and self‑driving AI‑R&D conditions. Examples that do not qualify:

  • Hardware‑dominated growth: Frontier AI capabilities mostly track increases in training compute and spending; algorithmic improvements and tooling continue at roughly historical rates, and AI assistants are used mainly as code copilots, doc summarizers, or debugging helpers, with humans still originating most key research ideas and experiment designs.

  • Narrow explosions only: There is an “explosion” in a narrow area (e.g. chip layout, protein folding, or game‑playing) but not in general‑purpose AI systems used for broad science and engineering work and AI R&D itself.

  • Moderate automation, no feedback loop: AI tools meaningfully increase researcher productivity (say 2–5×) but do not come to do the large majority of frontier AI R&D work, and there is no clear positive feedback where AI‑generated algorithmic breakthroughs quickly enable further major breakthroughs by even more capable AI researchers.

  • Strong progress without clear attribution to software + AI‑R&D automation: Even if AI capabilities keep improving rapidly, if it is ambiguous whether software‑driven, AI‑automated feedback loops were a major causal driver (as opposed to scaling compute and human‑initiated ideas), the default should be to resolve NO rather than stretching the definition of SIE.


Timing and resolution authority

  • The “5‑year window” means any contiguous 5‑year period that begins no earlier than market creation and ends before January 1st 2040 (e.g. 2026–2031, 2028–2033, 2032–2037, 2034–2039, etc.).

  • The market creator (or appointed successor) will resolve the question based on the totality of public evidence available by the time of resolution.

  • The market may resolve early to YES if, prior to 2040, there is broad recognition that a software intelligence explosion (as defined above) has already occurred and sufficient evidence is available. Otherwise, it should resolve within a reasonable time after January 1st 2040, once retrospective analyses of the late 2030s are available.
