Will technical limitations or safeguards significantly restrict public access to smarter-than-almost-all-humans AGI?
Closes 2032 · 45% chance

This market resolves YES if at least one of the following two conditions holds:

  • smarter-than-almost-all-humans (or smarter) AGI is not exposed for public use in the first 8 years after market creation, i.e. no smarter-than-almost-all-humans general capabilities are usable by the public (for example, capabilities that could significantly advance science without much guidance or hand-holding from humans, or do other things typically only achievable by the smartest humans or beyond)

  • the dominant limitations on how smarter-than-almost-all-humans AGI can be used are technical safeguards rather than legal ones (for example, the AI refusing unsafe interactions), and those safeguards are sufficient to gate unsafe public use. Legal requirements that actually result in technical safeguards count as technical rather than legal.

To summarize: either the public is prevented from accessing smarter-than-almost-all-humans-or-better AGI capabilities in the first place, or technical safeguards provide the dominant restrictions on public use and are able to prevent public misuse.

Clarifying rulings:

  • The relevant AI is gated behind legal consensus mechanisms for its use and is not exposed directly to the public [YES].

  • Progress on the relevant AIs is stopped (e.g. by international treaty) such that they are not exposed to the public; this counts as a technical limitation to safe exposure [YES].

  • Legal or financial liability is the dominant source of restrictions, such that the capabilities are publicly available [NO].

  • Significant human intelligence enhancement (via modification, not learning) occurs in the next 8 years such that the baseline comparison shifts [NO].

  • There is a major AI-related catastrophe caused by misuse by people other than the system's developers [NO].

  • Edit: such systems are not created at all, for any reason [YES].

Superhuman speed of execution of a task will not count toward the smartness condition; assume the comparison human gets a month to perform the task without access to the comparison AI or anything at its level. Assume public use that breaks commonly used encryption or leads to extensive harassment or abuse of humans triggers the major catastrophe condition.

This is a complicated market condition, so assume minor changes may occur to bring it closer to the spirit of the market: will the public get access to unsafe task capabilities that almost all humans lack? (The model could still be at that capability level and available to the public.)


I hold this at a low probability, less than 20%.

  • AGI may put most power in the hands of those with the most capital to deploy AGI agents.

  • These are mostly going to be large companies, governments and other responsible actors.

  • The use of AI agents for defensive purposes will reduce the pressure to prevent ordinary people from using them in smaller capacities.

  • There may be an open-source effort to release AGI agents, either from a major lab (e.g. FAIR) or from a nation state playing catch-up (e.g. China).

  • Open-source AGI, even if made less technically capable through safety training, will be fine-tuned by a less centralized open-source player back into a respectable AGI.

  • Even if the initial deployments by the first few labs come with technical limitations, 8 years is a very long time for others to create their own AGI without such restrictions.

@CampbellHutcheson That sounds plausible. If such capabilities were made publicly available, successful defense would likely demand global compute monitoring by an advanced agent. Conditional on no intervening catastrophe, 8 years could be enough time to get adequate defenses in place to allow safe public access (50%), and is very likely enough time to reach the level of capabilities (given a full push for them) that would require such defenses if they were offered to the public (95%). Conditional on the path not being heavily legally restricted (and no intervening catastrophe), the open-source push for those capabilities seems likely to get there in 8 years (85%).

If the defenses fell short of ubiquitous monitoring with reactive responses to misaligned or misused agents, I would consider public use of such capabilities an unacceptable risk. For this reason I consider it plausible that there will be a coordinated effort to ensure the public doesn't get access in the first place, though I'm pretty uncalibrated about what to expect for this market. Eight years is a significant window for a catastrophe to occur, even if it is plausibly long enough to develop the appropriate safeguards.
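
(Purely as an illustration of how conditional estimates like the ones above might be chained, here is a minimal sketch. The decomposition and the independence assumptions are mine, not the commenter's, and the resulting number is not a forecast for this market.)

```python
# A minimal, purely illustrative sketch of chaining the conditional estimates
# from the comment above. The decomposition and independence assumptions are
# NOT stated by the commenter; they are assumptions made here for illustration.

# Estimates from the comment (all conditioned on no intervening catastrophe):
p_capability = 0.95      # capabilities needing strong defenses exist within 8 years (full push)
p_open_release = 0.85    # open-source push reaches those capabilities (if not heavily legally restricted)
p_defenses_ready = 0.50  # adequate defenses in place to allow safe public access within 8 years

# Assumed chaining: unsafe public exposure requires the capability to exist,
# an unrestricted open-source release, and defenses not being ready in time.
p_unsafe_public_exposure = p_capability * p_open_release * (1 - p_defenses_ready)

print(f"Illustrative P(unsafe public exposure): {p_unsafe_public_exposure:.2f}")
print(f"Illustrative complement (loosely, the YES side): {1 - p_unsafe_public_exposure:.2f}")
```

Note that this sketch ignores the probability that the path is heavily legally restricted, or that a coordinated effort keeps the capabilities away from the public entirely, both of which the comment above treats as live possibilities.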