Resolution criteria
Resolves YES if, by market close, there exists at least one peer‑reviewed human‑subjects study in an academic journal showing a statistically significant positive association between use of generative AI chatbots/companions (e.g., ChatGPT, Character.AI, Replika) and higher mental‑health symptoms or diagnoses (e.g., depression via PHQ‑9/HADS/DASS‑21; anxiety via GAD‑7/HADS/DASS‑21), compared with lower use. Cross‑sectional or longitudinal designs qualify, and correlation is sufficient (no causal claim required). Preprints, news articles, and non‑peer‑reviewed reports do not count. A study that is retracted no longer qualifies.
Resolves NO if no such peer‑reviewed study has been published by market close.
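For concreteness, the bar for a qualifying result is a positive correlation (or an equivalent regression coefficient) between a usage measure and a validated symptom scale, significant at a conventional threshold such as p < .05. The sketch below is purely illustrative: the data are synthetic, and the variable names, sample size, and choice of Spearman correlation are assumptions made for illustration, not the methodology of any actual study.

```python
# Illustrative only: what a qualifying "significant positive association" could
# look like numerically. All data are synthetic; usage_hours and phq9_score are
# hypothetical variable names, not taken from any cited study.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Synthetic weekly chatbot-use hours for 300 hypothetical participants.
usage_hours = rng.gamma(shape=2.0, scale=3.0, size=300)

# Synthetic PHQ-9 depression scores (0-27), loosely increasing with use.
phq9_score = np.clip(0.6 * usage_hours + rng.normal(0, 4, size=300), 0, 27).round()

rho, p_value = spearmanr(usage_hours, phq9_score)

# A qualifying study would report something analogous to: rho > 0 with p < .05,
# published in a peer-reviewed journal.
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3g}")
```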
Example qualifying sources (for verification):
Frontiers in Public Health (2025): college students’ depression positively associated with use of conversational AI for companionship. (frontiersin.org)
International Journal of Mental Health and Addiction (2025): problematic ChatGPT use positively correlated with psychological distress (DASS‑21). (link.springer.com)
Background
Evidence is mixed: several RCTs and meta‑analyses find short‑term chatbot interventions can reduce depression/anxiety symptoms, while observational work links heavier or “problematic” AI‑chatbot use to worse mental‑health indicators. (pubmed.ncbi.nlm.nih.gov)
The field is new and growing rapidly, with 2024–2025 reviews noting limited real‑world safety data and heterogeneous measures. (cambridge.org)
Considerations
“Correlation” here includes studies where higher symptoms co‑occur with greater AI‑chatbot use (the direction of effect may be reversed or bidirectional). (frontiersin.org)
Results often come from convenience samples (e.g., students or single‑country cohorts), which may limit generalizability. (frontiersin.org)
Intervention trials showing symptom reductions do not negate a YES outcome; the market hinges on whether any qualifying positive association exists by close. (pubmed.ncbi.nlm.nih.gov)
Yes, but mostly because people really want such a study to exist. Also because if someone's mental health is declining due to other circumstances, like a divorce or job loss, they are probably more likely to talk to GPT for emotional support or advice, and to get into long, protracted conversations with it, because they can't get the level of emotional support they want elsewhere.