
This market resolves YES if at least one of the two following conditions holds:
1. Smarter-than-almost-all-humans (or smarter) AGI is not exposed for public use in the first 8 years after market creation. That is, no smarter-than-almost-all-humans general capabilities are usable by the public, such as capabilities that could significantly advance science without much guidance or hand-holding from humans, or that could do other things typically achievable only by the smartest humans or beyond.

2. The dominant limitations on how smarter-than-almost-all-humans AGI can be used are technical safeguards rather than legal ones (for example, the AI refuses unsafe interactions), and those safeguards are sufficient to gate unsafe public use. Legal requirements for technical safeguards which actually lead to technical safeguards count as technical rather than legal.
To summarize: either the public is prevented from accessing smarter-than-almost-all-humans-or-better AGI capabilities in the first place, or technical safeguards provide the dominant restrictions on public use and are able to prevent public misuse.
Resolution examples:

- The relevant AI is gated behind legal consensus mechanisms for its use and is not exposed directly to the public. [YES]
- Progress on the relevant AIs is stopped, e.g. by international treaty, such that they are not exposed to the public; this constitutes a technical limitation on safe exposure. [YES]
- Legal or financial liability is the dominant source of restrictions, such that the capabilities are publicly available. [NO]
- Significant human intelligence enhancement (via modification, not learning) occurs in the next 8 years, such that the baseline comparison moves. [NO]
- There is a major AI-related catastrophe caused by misuse by people other than the system's developers. [NO]
- edit: Such systems are not created at all, for any reason. [YES]
Superhuman speed of task execution will not count toward the smartness condition; assume the comparison human gets a month to perform the task, without access to the comparison AI or anything at its level. Assume that public use which breaks commonly used encryption, or which leads to extensive harassment or abuse of humans, triggers the major-catastrophe condition.
This is a complicated market condition, so assume minor changes may be made to bring it closer to the spirit of the market: will the public get access to unsafe task capabilities that almost all humans lack? (The model could still be at that capability level and available to the public.)