Former OpenAI Co-founder Announces New Company Focused on Safe Development of ‘Superintelligence’

Ilya Sutskever, one of the founders of OpenAI, has announced the creation of a new company, Safe Superintelligence Inc., aimed at safely developing advanced artificial intelligence. This move follows his departure from OpenAI last month. Sutskever revealed his plans in a social media post on Wednesday, detailing the company’s mission and vision.

Co-founded by Sutskever along with Daniel Gross and Daniel Levy, Safe Superintelligence Inc. is singularly focused on the safe development of “superintelligence” — AI systems that surpass human intelligence. The company’s primary goal is to prioritize safety and security in AI development, free from the distractions of traditional business pressures.

“Our only goal is safely developing superintelligence,” Sutskever and his co-founders stated. They emphasized that their business model is designed to shield their work from “short-term commercial pressures,” allowing them to focus exclusively on safety and security without the interference of “management overhead or product cycles.”

The new venture is based in Palo Alto, California, and Tel Aviv, leveraging the founders’ deep connections in these regions. “We have deep roots and the ability to recruit top technical talent,” the founders stated.

Sutskever’s departure from OpenAI comes after a tumultuous period within the company. Last year, he was part of an unsuccessful attempt to remove CEO Sam Altman. The attempted ouster led to significant internal conflict, with debates over whether OpenAI’s leadership was prioritizing commercial interests over AI safety.

During his tenure at OpenAI, Sutskever co-led a team dedicated to developing artificial general intelligence safely. Upon leaving OpenAI, he hinted at plans for a “very personally meaningful” project, which has now been revealed as Safe Superintelligence Inc.

Sutskever’s exit was followed by the resignation of his team co-leader, Jan Leike, who criticized OpenAI for allowing safety to “take a backseat to shiny products.” In response, OpenAI announced the formation of a safety and security committee, although it has predominantly been staffed with company insiders.

The establishment of Safe Superintelligence Inc. underscores the growing concerns within the AI community about the safe development of superintelligent systems. By creating a company solely dedicated to this cause, Sutskever and his co-founders aim to address these concerns head-on, free from the immediate pressures and distractions that often accompany commercial ventures.

ODSC Team

ODSC gathers the attendees, presenters, and companies that are shaping the present and future of data science and AI. ODSC hosts one of the largest gatherings of professional data scientists, with major conferences in the USA, Europe, and Asia.
