2. Details of SSI’s research and technology will leak. The big labs will make meaningful adjustments
37% chance
  • All these predictions are taken from Forbes/Rob Toews' "10 AI Predictions For 2026".

  • The 2025 predictions can be found here, and their resolution here.

  • You can find all the markets under the tag [2026 Forbes AI predictions].

  • Note that I will resolve to whatever Forbes/Rob Toews say in their resolution article for 2026's predictions, even if I or others disagree with his decision.

  • I might bet in this market, as I have no power over the resolution.

    Description of this prediction from the article:

    Full title: Details of SSI's research and technology will leak to the public. The big labs will make meaningful adjustments to their research roadmaps as a result.

No technology company in the world is more shrouded in mystery than Ilya Sutskever's Safe Superintelligence.

Sutskever was OpenAI's cofounder and chief scientist until his dramatic falling-out with OpenAI CEO Sam Altman in late 2023. He is widely regarded as one of the greatest researchers in the field of modern AI.

In summer 2024, Sutskever launched SSI. In his public statements, Sutskever has been clear that he believes the research direction the large incumbent labs are pursuing is destined to plateau and is not the best path to building superintelligence. He says he and SSI are working on something entirely new.

But nobody outside of SSI has any idea what this new paradigm is. It is remarkable how effective SSI has been at keeping its research plans completely under wraps.

This secrecy cannot last forever. In 2026, details of SSI's approach will finally leak to the public. It will be a novel and promising enough research agenda that it will prompt the big labs — including OpenAI, Anthropic and Google DeepMind — to recalibrate their own research roadmaps and to invest more heavily in this direction.

What could Sutskever and SSI’s big idea possibly be?

Two obvious answers would be recursive self-improvement (AI systems that build stronger AI systems, which in turn build still stronger systems, and so forth) or continual learning (AI systems that learn on an ongoing basis as they interact with the world).

Both of these fields address fundamental shortcomings of today’s AI systems, and both have become buzzy frontier research areas in recent months.

But we speculate that it’s something less consensus and more “out there” than these. We can’t wait to find out.
