Ilya Sutskever, a co-founder and former chief scientist of OpenAI, has officially launched a new venture named Safe Superintelligence Inc. (SSI). The announcement, made just a month after his departure from OpenAI, positions SSI as a research organization with a singular and uncompromising mission: to develop safe superintelligence.
Sutskever is joined by co-founders Daniel Gross, a former AI lead at Apple, and Daniel Levy, a former researcher at OpenAI. In a public statement, the trio emphasized that the company is insulated from the commercial pressures that shape product development at other major AI labs: SSI will focus entirely on safety research, free of short-term product cycles and revenue targets. This approach contrasts directly with the strategies of companies such as OpenAI, Google, and Anthropic, which balance cutting-edge research against the rapid deployment of commercial AI products.
The formation of SSI follows a period of high-profile turmoil at OpenAI, where Sutskever played a key role in the board's temporary ouster of CEO Sam Altman last November, a move reportedly driven by concerns over AI safety. His new company, with offices in Palo Alto and Tel Aviv, aims to attract top talent dedicated to the immense challenge of aligning powerful AI systems with human values.
By creating an institution with safety at its core, Sutskever is making a bold statement about the future of AI development. SSI’s success could create a new paradigm for AI research, prioritizing caution and security over the race to commercialize artificial general intelligence.