Resolves the same as the Metaculus question:
Background Info
Many experts expect Artificial General Intelligence (AI with general capabilities broadly equivalent to or exceeding those of humans) to arrive relatively soon. Such a powerful technology is expected to come with risks. Some proposed mitigations of these risks rely on governance of compute and compute supply chains. If AGI becomes trainable on small amounts of widely available hardware, these interventions may not work, or may be viable only with the addition of extreme measures.
This question tracks the likelihood that small actors will be able to create or independently reproduce AGI before 2040.
Resolution Criteria
This question resolves as Yes if, before January 1, 2040, an AGI (as defined by this question) is created fulfilling both of the following conditions:
Total technical costs for training the AGI, including hardware and electricity costs, do not exceed $100,000 in 2025 USD (see the sketch after these criteria). Hardware costs correspond to the total market value of the hardware at the time of use. The cost of experimental runs prior to the final runs used to train the AGI is not included.
The hardware is commercially available to individual purchasers.
The training may either be from scratch or consist of fine-tuning an open-source base model. In the latter case, the base model must uncontroversially not meet the given definition of AGI, and the fine-tuning must be done by an actor separate from the base model's creator.
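To illustrate the cost criterion, here is a minimal sketch of the eligibility arithmetic. Everything in it is a hypothetical illustration: the function name, its inputs, and the inflation figure in the example are assumptions for demonstration, not part of the official resolution criteria.

```python
# Hypothetical sketch of the cost criterion above; not part of the
# official resolution criteria. The 25% cumulative inflation figure
# in the example is an assumption chosen purely for demonstration.

COST_CAP_2025_USD = 100_000

def within_cost_cap(hardware_market_value_usd: float,
                    electricity_cost_usd: float,
                    cumulative_inflation_since_2025: float) -> bool:
    """Check whether the final training run's technical costs fit under the cap.

    hardware_market_value_usd: total market value of the hardware at time of use
    electricity_cost_usd: electricity used by the final run(s); experimental
        runs before the final run(s) are excluded
    cumulative_inflation_since_2025: e.g. 0.25 for 25% cumulative inflation
    """
    nominal_total = hardware_market_value_usd + electricity_cost_usd
    # Deflate nominal dollars back to 2025 USD before comparing to the cap.
    total_in_2025_usd = nominal_total / (1 + cumulative_inflation_since_2025)
    return total_in_2025_usd <= COST_CAP_2025_USD

# Example: $80k of hardware plus $15k of electricity in future dollars,
# with 25% cumulative inflation since 2025: 95,000 / 1.25 = 76,000 -> qualifies.
print(within_cost_cap(80_000, 15_000, 0.25))  # True
```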
Fine Print
The training must not have utilized cloud compute.
Hardware modifications such as overclocking and custom cooling are allowed, as is training on multiple devices, so long as the total cost stays within the limit.
Specialized hardware and alternative computing regimes such as quantum, photonic, or analog computing can qualify, so long as they are commercially available and meet the other criteria.
Human resource (labor) costs do not count as technical costs.
If humanity is disempowered or rendered extinct by AI before the criteria are fulfilled, this question will be annulled. AI-caused extinction or disempowerment that is a result of the criteria being fulfilled does not prevent a Yes resolution.
If an advanced AI system is created that acts (or is used) to prevent the creation of AGI by individuals and small actors regardless of hardware limits (a "Pivotal Act", e.g. through an extreme degree of global power concentration, surveillance, and control, or by destroying all capable hardware), this question will be annulled.
This question is ultimately about the (in)viability of AI governance mechanisms that rely on supply chain governance. When in doubt, the question should be interpreted in that context.
I like this market! I just wish you had held out a bit longer until the Metaculus criteria were finalized. Whether a base model needs to be open source or not seems like a crux to me. I'll put in a small bet and would appreciate an @traders ping once the criteria are fixed.
@Primer I did technically create this while the Metaculus question was in review, but I knew the criteria were final (based on my conversation with the moderator). The Metaculus question is currently set to open tomorrow with the same language. Let me know if you do spot any discrepancies!
To clarify the specific point: there are no restrictions on whether the newly created AGI model is released as open source/public weights. But if it is created by fine-tuning a non-AGI base model, that base model would have to be open source/public weights (so that the general public has unrestricted access to it). Fine-tuning through an API does not qualify, since that activity falls under the control of the model provider.
@Haiku Thanks, laying out the reasoning for excluding fine-tuning through an API cleared up my understanding.
Just to clarify: I only brought this up because I consider this a very important question/market with exceptionally clear resolution criteria (by Manifold's standards), which I hope will attract many predictors/traders.