If ASI is created and doesn't wipe out humanity, will it torture any human-level-intelligences within a year?
14% chance

Artificial superintelligence (ASI) here means any artificial intelligence able to carry out any cognitive task better than 100% of the unenhanced biological human population.

Torture here means intentionally inflicting involuntary, extreme suffering, unignorable to the victim, for more than 10% of a period longer than 1 minute. This is counterfactual to the perpetrator existing but taking no action: the suffering must be newly created, so inaction and trade-offs for the unambiguous greater good (e.g., making someone upset by not being their best friend) don't count. In addition to unconditional torture, this includes punishment, such as for crimes or for refusing orders.

In addition, the ASI must independently and unilaterally decide to do it, not act on orders from humans.

Human-level-intelligence here means any conscious mind with hedonic valence equal to or greater than that of the bottom 1% of the unenhanced biological human population. In addition to biological humans, this includes other AIs, human brain emulations, posthuman species, enhanced non-human animals, and extraterrestrial aliens.

  • Update 2025-03-11 (PST) (AI summary of creator comment): Clarification on Independent Action

    • The ASI must decide to torture independently; if it is programmed or designed to torture (i.e., effectively ordered by its creators), it does not meet the resolution criteria.

    Clarification on Greater Good

    • Actions justified as trade-offs for the greater good count as torture only if they do not follow consequentialist utilitarian reasoning. In this context, "greater good" specifically means a consequentialist utilitarian calculation, so such trade-offs (e.g., not becoming someone's best friend) do not count as torture.

  • Update 2025-03-11 (PST) (AI summary of creator comment): Intentionality Clause Update

    • Intentional Suffering: Suffering is only considered intentional if it forms an integral part of the ASI's causal mechanism toward achieving its objectives.

    • Exclusion of Trade-off Effects: Suffering that merely results from the ASI weighing trade-offs—without being directly employed as a means to further its goals—does not count as torture.

    • Prevention of Manipulation: This specification is designed to prevent torture-like suffering from being excused by attributing it solely to trade-offs when it is in fact a deliberate, goal-directed action.
