Why Ilya Sutskever Left OpenAI to Build a Safety-First AI Lab
Ilya Sutskever spent nearly a decade at OpenAI arguing that safety research and capability research must advance together, but he ultimately concluded that realizing that vision required an entirely new company. He departed OpenAI in May 2024 and launched Safe Superintelligence Inc. (SSI) one month later with co-founders Daniel Gross and Daniel Levy. By March 2025, SSI had raised over $3 billion at a $32 billion valuation, despite having no public product.
What Changed Sutskever's Mind About OpenAI's Approach?
Sutskever's departure reflected a turning point in his thinking about how to build safe superintelligent systems. During his time at OpenAI, he advocated internally for running safety and capability research in parallel. He eventually concluded, however, that OpenAI's structure, with its capped-profit model and product timelines, could not accommodate his vision for purely safety-focused research. In a deposition related to Elon Musk's lawsuit against OpenAI, Sutskever explained his reasoning.
"Ultimately, I had a big new vision. And it felt more suitable for a new company," stated Sutskever.
Ilya Sutskever, Founder of Safe Superintelligence Inc.
The timing of his exit was significant. Just months before his departure, Sutskever had participated in the November 2023 board vote to remove Sam Altman as CEO. That decision backfired spectacularly; over 700 OpenAI employees threatened to resign unless Altman returned. Altman was reinstated within four days, and Sutskever's governance role effectively ended. He remained at the company for six more months before announcing his departure.
How Does SSI's Structure Differ From Traditional AI Companies?
SSI was built on a single principle: focus exclusively on safe superintelligence without commercial pressure. When Sutskever announced the company's launch in June 2024, he articulated this philosophy clearly.
"We will pursue safe superintelligence in a straight shot, with one focus, one goal, and one product," announced Sutskever.
Ilya Sutskever, Founder of Safe Superintelligence Inc.
This commitment to a single mandate shaped every operational decision. SSI launched with fewer than 20 researchers and deliberately avoided setting a product release date. The company rejected product diversification and commercial timelines that might distract from safety research. This stands in contrast to Anthropic, where co-founder Dario Amodei chose a different path: building safety into model training within a traditional company structure.
By November 2025, Sutskever had refined his diagnosis of why this structure matters. In a conversation with podcast host Dwarkesh Patel, he argued that the field has moved out of an era in which scaling alone drives progress and back into one where research ideas do.
"We are squarely an age of research company," explained Sutskever.
Ilya Sutskever, Founder of Safe Superintelligence Inc.
What Does Sutskever Believe About the Future of AI Training?
Sutskever's public statements since leaving OpenAI reveal a fundamental shift in how he views AI development. In December 2024, he delivered a keynote at NeurIPS arguing that the era of large-scale pre-training is ending. Pre-training is the initial phase in which a model learns from vast amounts of internet text by repeatedly predicting the next token. Sutskever's claim was stark.
"Pre-training as we know it will unquestionably end," stated Sutskever.
Ilya Sutskever, Founder of Safe Superintelligence Inc.
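To make the term concrete, here is a minimal sketch of the next-token-prediction objective that pre-training optimizes. The toy model, vocabulary size, and random "corpus" are illustrative placeholders, not anything OpenAI or SSI uses; real pre-training runs this same loop over trillions of tokens.

```python
import torch
import torch.nn as nn

VOCAB_SIZE, CONTEXT, DIM = 1000, 32, 64  # toy scales, for illustration only

class TinyLM(nn.Module):
    """Toy language model: embed each token, predict the one that follows."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, DIM)
        self.head = nn.Linear(DIM, VOCAB_SIZE)

    def forward(self, tokens):                # tokens: (batch, seq)
        return self.head(self.embed(tokens))  # logits: (batch, seq, vocab)

model = TinyLM()
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)

# Stand-in for "text from the internet": random token ids. The real
# corpus is enormous but finite, which is the constraint Sutskever
# points to below.
batch = torch.randint(0, VOCAB_SIZE, (8, CONTEXT + 1))
inputs, targets = batch[:, :-1], batch[:, 1:]  # targets: inputs shifted by one

logits = model(inputs)                         # one pre-training step
loss = nn.functional.cross_entropy(
    logits.reshape(-1, VOCAB_SIZE), targets.reshape(-1)
)
loss.backward()
optimizer.step()
print(f"next-token loss: {loss.item():.3f}")
```

A production model replaces the embedding-plus-linear stack with a deep transformer, but the objective, cross-entropy on the shifted sequence, is the same.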
His reasoning is straightforward: there is only one internet, and the training data it offers has been largely consumed. This constraint means that simply scaling up compute and data, the traditional path to AI improvement, will stop working. The timing of the keynote was pointed: OpenAI and Google had just begun promoting synthetic data and test-time compute as successor scaling methods, a sign the industry was already grappling with the same problem.
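One way to formalize the argument is the parametric scaling law fitted in the Chinchilla paper (Hoffmann et al., 2022); the equation is offered here as an illustration of the general point, not as a formula Sutskever himself cited:

$$L(N, D) \approx E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}$$

Here N is the parameter count, D the number of training tokens, E the irreducible loss, and A, B, α, β fitted constants. If the usable token supply is capped at some maximum D_max, then no amount of additional compute pushes the loss below E + B/D_max^β: the data term becomes a floor, which is exactly the wall Sutskever describes.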
This diagnosis has profound implications for how AI companies should organize themselves. If progress can no longer be bought by scaling compute and data, then research ideas become the limiting factor. SSI's entire structure is built on that bet.
How to Understand Sutskever's Vision for AI Safety
- Research-First Organization: SSI prioritizes pure research over product development, with no commercial timeline attached to safety work, allowing researchers to focus entirely on solving superintelligence safety challenges.
- Single Mandate Focus: The company operates with one goal and one product, rejecting product diversification that could dilute attention from the core safety mission.
- Structural Independence: Sutskever concluded that safety-focused AI research requires a separate company structure free from profit-driven product deadlines that characterize traditional AI labs.
Sutskever's broader philosophy extends beyond organizational structure. Over the years, he has made provocative claims about AI consciousness and the future of human-AI integration. In February 2022, he posted on social media that today's large neural networks might be "slightly conscious," a claim that drew criticism from prominent AI researchers, including Yann LeCun and cognitive scientist Stanislas Dehaene. Anthropic has since launched model welfare research exploring similar questions, a sign that such ideas have moved toward the field's mainstream.
In October 2023, Sutskever discussed the possibility of human-AI merger in an MIT Technology Review interview. He framed it as a technology adoption curve: something that seems radical today but will become normal within a generation. This perspective reveals how Sutskever thinks about the long-term trajectory of AI development, not just near-term safety concerns.
The journey from OpenAI to SSI represents more than a career move. It reflects a conviction that building safe superintelligent systems requires organizational structures fundamentally different from those that built today's large language models. Whether SSI's approach succeeds will likely shape how the AI industry thinks about safety research for years to come.