In January 2026, how publicly salient will AI deepfakes/media be, vs AI labor impact, vs AI catastrophic risks?
Answer options and current probabilities:

- Deepfakes > Labor > Catastrophic: 21%
- Labor > Deepfakes > Catastrophic: 30%
- Catastrophic > Labor > Deepfakes: 10%
- Catastrophic > Deepfakes > Labor: 10%
- Labor > Catastrophic > Deepfakes: 18%
- Deepfakes > Catastrophic > Labor: 11%

In January 2026, I'll consult polls, Google Trends, social media trends, ratios of media coverage by topic, and other objective sources, as well as subjective "vibes" from laypeople I encounter, to order the following categories of risk associated with AI by their salience to the general American public:

1. AI Media / Deepfakes - Concerns associated with synthetic media, such as fraud, harms to human social connection, addictive synthetic entertainment (above and beyond harms from social media in general), propaganda and manipulation, etc.
2. Catastrophic / Existential Risks from AI - This includes harms such as catastrophic misuse of AI systems, risks associated with an AI race, Great Power conflict spurred by AI (but not harms from lethal autonomous weapons in general), human extinction due to AI, etc.
3. Labor Risks - Labor displacement, instability, and/or fewer economic opportunities for ordinary people due to AI. This can include the prospect of losing meaning without work, as well as harms associated with individuals and/or institutions being forced to defer to AIs for crucial tasks. I'll also slot injustice associated with stolen human creativity / copyright considerations into this category.
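
To give a rough sense of how signals from several sources might be combined into a single ordering, here is a minimal sketch in Python. The sources, rank values, and averaging rule are all hypothetical illustrations, not the actual resolution procedure, which remains a judgment call on my part.

```python
# Hypothetical sketch: average the rank each source assigns to each category
# (1 = most salient). All numbers are made up for illustration only.
source_ranks = {
    "polls":          {"Deepfakes": 2, "Labor": 1, "Catastrophic": 3},
    "google_trends":  {"Deepfakes": 1, "Labor": 2, "Catastrophic": 3},
    "media_coverage": {"Deepfakes": 2, "Labor": 1, "Catastrophic": 3},
}

categories = ["Deepfakes", "Labor", "Catastrophic"]
mean_rank = {
    c: sum(ranks[c] for ranks in source_ranks.values()) / len(source_ranks)
    for c in categories
}

# Lower mean rank = more salient to the public.
ordering = sorted(categories, key=lambda c: mean_rank[c])
print(" > ".join(ordering))  # e.g. "Labor > Deepfakes > Catastrophic"
```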

This market only assesses the relative salience of these risks, meaning I will not consider the extent to which the public is generally more optimistic than pessimistic about AI. Instead, I'll simply assess how people appear to prioritize their attention among these topics when asked to consider only AI risks rather than benefits.

Since I am considering public salience, I will not attempt to evaluate whether that salience is justified or proportional, nor whether the harms are existing, short-term, or long-term. For example, if deepfakes receive the most attention in people's perception of AI risks, I will rank that category first even if damaging deepfakes are not particularly common in practice. As another example, Catastrophic risks might be ranked first in salience even if most people think the worst harms are long-term (as with climate change).

If some of the comparisons are hard to judge, I may resolve the market to multiple orderings with nonzero percentages. For example, if I think the ordering is Labor > Deepfakes = Catastrophic, I might resolve 50% to each of Labor > Deepfakes > Catastrophic and Labor > Catastrophic > Deepfakes.
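
As a minimal sketch of that splitting rule, assuming probability is divided equally among all total orderings consistent with a tied judgment (the scores below are hypothetical):

```python
from itertools import permutations

# Judged salience scores; equal scores represent a tie.
# These numbers are hypothetical, for illustration only.
judged = {"Labor": 3, "Deepfakes": 2, "Catastrophic": 2}

def consistent(ordering, scores):
    """An ordering is consistent if it never places a lower-scored
    category ahead of a higher-scored one."""
    return all(scores[a] >= scores[b]
               for a, b in zip(ordering, ordering[1:]))

valid = [o for o in permutations(judged) if consistent(o, judged)]
share = 1 / len(valid)
for o in valid:
    print(f"{' > '.join(o)}: {share:.0%}")
# Labor > Deepfakes > Catastrophic: 50%
# Labor > Catastrophic > Deepfakes: 50%
```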

To give a sense of how I assess things currently: if I had to resolve to an ordering as of December 14, 2024, I think I'd choose Labor > Deepfakes > Catastrophic. I'll report or discuss how I'd order the issues throughout 2025 in the comments when asked.
