As of now, people are debating existential risk from misalignment, technological unemployment, lack of security in critical applications, and fairness/equity/inclusion issues, among others. Will something completely different and very important be generally considered the main risk of AI in 2030? Resolves on Dec 31, 2030 based on the consensus of researchers at that time.
@StevenK A relatively obscure precursor to most ideas is easy to find in hindsight. Let's draw the line at topics that have, as of now, at least half as many preprints on arXiv or equivalent venues as the least popular of the examples I listed above.
@mariopasquato I mean something simpler than the destruction of all life on Earth: for example, the creation of new deadly viruses or prions.