MANIFOLD
4. Discourse about AGI and superintelligence will become less fashionable and less common.
Dec 31 · 63% chance
  • All these predictions are taken from Forbes/Rob Toews' "10 AI Predictions For 2026".

  • For the 2025 predictions you can find them here, and their resolution here.

  • You can find all the markets under the tag [2026 Forbes AI predictions].

  • Note that I will resolve to whatever Forbes/Rob Toews say in their resolution article for 2026's predictions, even if I or others disagree with his decision.

  • I might bet in this market, as I have no power over the resolution.

    Coming into 2025, expectations about AI’s trajectory and the timeline to artificial general intelligence were sky-high. The discourse was breathless.

    “Systems that start to point to AGI are coming into view,” wrote OpenAI CEO Sam Altman in February. “The economic growth in front of us looks astonishing, and we can now imagine a world where we cure all diseases and can fully realize our creative potential.”


    Or, as Anthropic CEO Dario Amodei put it around the same time: “What I’ve seen inside Anthropic and out over the last few months has led me to believe that we’re on track for human-level AI systems that surpass humans in every task within 2-3 years.”

    Essays like “Situational Awareness” and “AI 2027,” which painted mind-blowing pictures of how the world would be unrecognizably transformed by superintelligent AI over the next two to three years, made the rounds and summed up the zeitgeist.

    Over the course of 2025, gradually but unmistakably, this has changed.

    Highly anticipated models like GPT-5 achieved only incremental improvements over their predecessors. Agents, while showing great promise, continue to lack the capabilities and reliability to cross the chasm to widespread adoption. Foundation models in areas like robotics and biology are not yet ready for primetime.


    Amusingly, even the AI 2027 authors themselves have begun to walk back their own claims from that piece.


    The vibe is shifting. Across the AI ecosystem, a consensus is emerging that superintelligent AI is likely not around the corner — and more to the point, that it may not matter that much. This technology is already extremely powerful. Well before the arrival of AGI, trillions of dollars of value creation are up for grabs as AI reshapes every industry and organization.


    In 2026, this vibe shift will translate into noticeably less discourse about, and interest in, the concepts of AGI and superintelligence. It’s not that people will challenge or reject these concepts outright; they will just be less focused on them. AI leaders like Sam Altman, Dario Amodei, Sundar Pichai and Satya Nadella will spend less time talking about superintelligent AI and more time talking about enterprise AI adoption. Commentators and thought leaders will choose to opine on more proximate topics, from the geopolitics of AI to AI-driven job displacement. Go-to discussion topics at cocktail parties and around the watercooler will shift in a similar direction.

    Over the past few months, Ilya Sutskever and Andrej Karpathy — two of the most influential and respected figures in AI — each went on Dwarkesh Patel’s podcast and shared relatively sober views on the timeline to AGI, with Sutskever estimating it would take between five and 20 years to achieve AGI and Karpathy predicting 10 years. Their views both reflected and shaped the broader community’s perspectives.

Comments

Resolution criteria?

@nothing_ever_happens his personal judgment.
