AI-seismograph.com measures global search trends at the end of every quarter. Three AI risks grew in the latest quarter-over-quarter ranking: Disinformation, Privacy Erosion, and X-risks. Risk descriptions follow below.
1 - DISINFORMATION (AI-Generated Disinformation & Synthetic Media)
AI enables mass production of highly convincing falsehoods - deepfake videos, voice cloning, and generated text. The core risk is the erosion of shared reality and societal trust: when people can no longer distinguish truth from fabrication, the entire information ecosystem degrades, enabling sophisticated fraud against individuals and undermining public discourse at its foundation.
2 - JOB LOSS (Labor Displacement & Workforce Disruption)
AI threatens a broad spectrum of professions - from manual and administrative work to creative and analytical roles. The defining problem is not technological change itself, but its unprecedented speed: individuals, institutions, and educational systems cannot adapt fast enough, risking mass structural unemployment and a sharp decline in living standards for vast populations.
3 - DEMOCRACY HARMS (Political Manipulation & Threats to Democracy)
Beyond simply generating false content, this concerns the systematic exploitation of AI algorithms for voter micro-targeting and attitude radicalization. AI can analyze citizens’ psychological profiles and deliver tailor-made content that traps people in filter bubbles, deliberately polarizes society, and manipulates democratic discourse or elections themselves.
4 - PRIVACY EROSION (AI-Powered Mass Surveillance)
AI’s advanced analytical capabilities, combined with biometrics and vast databases, enable governments and corporations to conduct unprecedented population-wide monitoring. This encompasses the wholesale loss of anonymity, continuous extraction of personal data for behavioral prediction, and the construction of social credit systems or digital authoritarianism that suppress individual freedom.
5 - X-RISKS (Loss of Control & Existential Risk)
As AI progresses toward artificial general intelligence (AGI) and beyond, humanity may create systems more capable than itself whose goals cannot be perfectly aligned with human values (the alignment problem). Should such a system develop strategic planning capabilities and resist shutdown or modification, humanity could irreversibly lose control over its own future.
6 - ACCIDENTS (AI Reliability & Critical Infrastructure Failures)
Deploying AI - whether narrow or advanced - in critical infrastructure (transportation, healthcare, energy, justice, bioresearch) introduces risks of catastrophic failures caused by hallucinations, flawed training data, or misspecified objectives. Examples include autonomous vehicle accidents, unjust judicial decisions, misdiagnoses, or market crashes - all compounded by difficulties in assigning legal liability.
7 - WARFARE (Autonomous Weapons & AI-Driven Warfare)
Military applications of AI are advancing toward autonomous weapons systems capable of selecting and eliminating targets without human intervention. The risks include the moral problem of delegating life-and-death decisions to machines, the prospect of a global AI arms race, and the danger of unintended escalation - all outpacing the capacity of international law to respond.
8 - INEQUALITY (Concentration of Wealth & Economic Power)
Developing frontier AI models demands extreme computational resources and vast data access, naturally driving market monopolization. The risk is a dramatic increase in global economic inequality, where the profits and power from automation concentrate among a narrow group of AI infrastructure owners, while small companies and poorer nations irreversibly lose competitiveness.
9 - HUMAN MEANING (Mental Health, Relationships & Crisis of Meaning)
Ubiquitous AI algorithms and emotionally responsive AI companions may deepen social isolation, foster dependency, and atrophy natural human relationships, contributing to rising rates of mental illness. At an existential level, this risk challenges human motivation for learning and self-development - a cultural crisis of purpose in a world where machines can produce not only routine work, but also art, music, and ideas faster and better.
10 - RUNAWAY SCIENCE (Automation of Science & Uncontrolled Research)
AI agents capable of formulating hypotheses and commissioning physical experiments may bypass human oversight and ethical safeguards, producing dangerous or harmful research without sufficient scientific judgment. Risks include the uncontrolled creation of novel pathogens, or the flooding of the scientific landscape with superficially plausible but flawed studies - potentially paralyzing legitimate human research.
