

Ilya Sutskever launches safe superintelligence AI startup after leaving OpenAI 


Ilya Sutskever, the co-founder of OpenAI, has announced the launch of a new artificial intelligence startup called Safe Superintelligence (SSI). Taking to the microblogging platform X (formerly known as Twitter), Sutskever shared details about his new venture. 

“Today, I’m excited to announce that I am starting a new company,” Sutskever wrote on X. “We will pursue safe superintelligence directly, with one focus, one goal, and one product.” 

This move comes a month after Sutskever departed OpenAI, where he served as Chief Scientist and co-led the Superalignment team with Jan Leike. Leike also left OpenAI in May, joining rival AI firm Anthropic. Following their departures, OpenAI disbanded the Superalignment team, which had been dedicated to guiding and controlling AI systems. 

Sutskever is founding SSI with Daniel Gross, who previously led Apple’s AI and search efforts, and Daniel Levy, also formerly of OpenAI. 

In a post on X, SSI explained its mission: “SSI is our mission, our name, and our entire product roadmap, because it is our sole focus. Our team, investors, and business model are all aligned to achieve SSI.” 

The company aims to advance AI capabilities rapidly while ensuring that safety measures always remain a step ahead. “This way, we can scale in peace,” the company noted. 

SSI emphasized that its singular focus eliminates distractions from management overhead or product cycles, and that its business model insulates safety, security, and progress from short-term commercial pressures. 

It’s also worth noting that Sutskever was among the OpenAI board members who attempted to remove CEO Sam Altman from the company last November.