Resolution criteria
Resolves YES if, by 23:59:59 UTC on December 31, 2026, any single consumer LLM assistant platform (e.g., ChatGPT, Google Gemini, Anthropic Claude, Meta AI, Character.AI, Perplexity, Poe) has credible, public evidence that at least 10% of its daily active users spend ≥120 minutes per day in that assistant FOR NON-WORK-RELATED TASKS, measured over a period of at least one month.
Acceptable evidence: an official company disclosure; or a reputable third‑party measurement firm (e.g., Similarweb, data.ai/Sensor Tower, Comscore/Nielsen, Ofcom, Pew) publishing a platform‑level time‑spent distribution showing the 10% share. Aggregated stats across multiple assistants or across broad suites (e.g., Microsoft 365/Windows‑wide “Copilot time”) do not count; the metric must be for a single assistant product. If multiple platforms meet the threshold, resolves YES upon the first qualifying publication. If no qualifying evidence is published by the deadline, resolves NO. Examples of acceptable source types: official platform or regulator blog posts with explicit time‑spent distributions, or analytics firm reports with methodology notes. (similarweb.com)
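The resolution test above can be sketched as a simple computation over a per‑user time‑spent distribution. The function and sample data below are purely illustrative; the numbers are hypothetical, not real platform figures.

```python
# Sketch of the resolution check, assuming access to each DAU's average
# daily minutes of non-work assistant use over a one-month window.
# All figures below are hypothetical.

def meets_threshold(daily_minutes, min_minutes=120, share=0.10):
    """Return True if at least `share` of users average >= `min_minutes` per day."""
    if not daily_minutes:
        return False
    heavy = sum(1 for m in daily_minutes if m >= min_minutes)
    return heavy / len(daily_minutes) >= share

# Hypothetical distribution: 8 lighter users and 2 heavy users (>=120 min/day),
# so 20% of users clear the bar and the market would resolve YES.
sample = [5, 12, 20, 30, 45, 60, 75, 90, 130, 150]
print(meets_threshold(sample))  # True
```

In practice no analytics firm publishes raw per‑user data; a qualifying report would instead state the 10% share directly or give a distribution from which it can be read off.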
Background
Heavy daily use exists in some cohorts: a 2024–2025 St. Louis Fed survey finds that among workers who used generative AI in the prior month, about a third reported ≥1 hour per workday using it (not platform‑specific). (stlouisfed.org)
Engagement varies widely by assistant. For Character.AI, historical reporting and analytics have cited unusually high time spent (often approaching or exceeding an hour per day on average), far above typical chatbot session times; recent reporting still places average use around 80 minutes/day. (similarweb.com)
Mainstream assistants like ChatGPT and Gemini generally show short average web session durations in third‑party data, implying that any platform reaching the “≥2 hours/day for 10% of users” threshold would likely do so through a subset of heavy users rather than broad average engagement. (explodingtopics.com)
Considerations
Measurement pitfalls: many web metrics report “time per visit,” not per‑user daily time, and may miss mobile app or voice usage; only sources reporting per‑user daily time distributions will count. (digitalinformationworld.com)
Survey data must be platform‑specific (e.g., “on Character.AI/Gemini, x% of DAU spend ≥120 min/day”); general “AI usage” surveys (across multiple tools) will not qualify. (stlouisfed.org)
Some platforms (e.g., Character.AI) now surface time‑use summaries to users/parents; if the company publishes aggregate distribution stats from these dashboards, that would qualify. (theverge.com)