
This question originally appeared in a study by the Forecasting Research Institute, which explored a method for identifying forecasting questions that AI experts consider the best indicators of catastrophic risk from AI. The study's findings are the subject of a forthcoming report titled “Conditional Trees: A Method for Generating Informative Questions about Complex Topics”. The questions appear here only slightly edited from their original form in the study. This is one of several questions that experts found most informative. We're excited to see whether the Manifold community's forecasts are similar to those found in the study. We will link to the report here when it is released this month.
The overall question and the multiple-choice (conditional) questions above resolve based on the following Manifold questions:
Overall/ultimate question: Before 2100, will AI cause the human population to fall below 5000 individuals?
Options 1-2: Before 2030, will there be an AI-caused administrative disempowerment?
Options 3-4: Between 2023 and 2030, will revenue from deep learning double every two years?
Options 5-6: Before 2050, will an AI system be shut down due to exhibiting power-seeking behavior?
Options 7-8: If, before 2050, AI kills more than 1 million people, will the policy response be insufficient?