
Is the nature of AI risk completely misunderstood today with respect to the state of the art in 2030?
30% chance
As of now, people are debating existential risk due to misalignment, technological unemployment, lack of security in critical applications, and fairness/equity/inclusion issues, among others. Will something completely different and very important be generally considered the main risk of AI in 2030? Resolves on Dec 31, 2030, based on the consensus of researchers in 2030.
Update 2024-12-25 (PST): Lack of security in critical applications includes risks such as AI-enabled bioterrorism. (AI summary of creator comment)
This question is managed and resolved by Manifold.
Related questions
In 2050, will the general consensus among experts be that the concern over AI risk in the 2020s was justified?
78% chance
Will humanity wipe out AI x-risk before 2030?
10% chance
If humanity survives to 2100, what will experts believe was the correct level of AI risk for us to assess in 2023?
38% chance
Will AI wipe out AI before 2030?
9% chance
Will AI be considered safe in 2030? (resolves to poll)
72% chance
Will there be a massive catastrophe caused by AI before 2030?
29% chance
Will AI wipe out AI before the year 2030?
4% chance
Will humanity wipe out AI before the year 2030?
11% chance
Will humanity wipe out AI before the year 2030?
7% chance
Will humans wipe out AI by 2030?
6% chance