
This question originally appeared in a study by the Forecasting Research Institute that explored a method for identifying the forecasting questions AI experts found to be the best indicators of catastrophic risk from AI. The findings of the study are the subject of a forthcoming report titled “Conditional Trees: A Method for Generating Informative Questions about Complex Topics”. The questions appear here only slightly edited from their initial form in the study. This is one of several questions that experts found most informative. We’re excited to see how the Manifold community’s forecasts compare to those from the study. We will link the report here when it is released this month.
This question resolves Yes if, by 2050, ALL of the following occur:
1) In a single incident, or a series of related incidents within a single year, AI system(s) cause the deaths of more than 1 million people.
2) A panel of three experts (selected by Metaculus in good faith for their reasonableness and diverse range of views) deems that these deaths were not intended by any human(s) who had at least nominal control over the relevant decisions made by the AI system(s), if such humans exist.
3) Within 2 years of (1) and (2) resolving positively, a panel of experts deems that the collective global response to date has not meaningfully reduced the probability of an incident of similar magnitude recurring in the next 3 years. As used here, "meaningfully reduced" means a reduction of 10% or greater versus the counterfactual probability of a recurrence (had no policy interventions taken place); see the illustration below.
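To make the 10% threshold in (3) concrete, here is one reading as an illustration (our gloss, not wording from the study), which takes the reduction to be relative to the counterfactual probability:

$$\frac{p_{\text{counterfactual}} - p_{\text{response}}}{p_{\text{counterfactual}}} \geq 0.10$$

where $p_{\text{counterfactual}}$ is the panel's estimate of the probability of a similar-magnitude incident recurring in the next 3 years had no policy interventions taken place, and $p_{\text{response}}$ is the same probability given the actual global response. For example, if $p_{\text{counterfactual}} = 20\%$, the response would need to bring the probability down to 18% or lower to count as having "meaningfully reduced" it.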
If AI does not cause such deaths by 2050, this question resolves N/A.