Ilya Sutskever, co-founder and former chief scientist of OpenAI, has announced a new AI company whose primary objective is to develop a “safe superintelligence.”
Daniel Levy, formerly of OpenAI, and Daniel Gross, a former Apple AI director, co-founded Safe Superintelligence Inc. alongside Sutskever, according to the June 19 announcement.
According to the firm, superintelligence is “within reach,” and its “ensured safety” is the “most significant technical challenge of our era.”
The company’s stated aim is to operate a “straight-shot safe superintelligence (SSI) lab” focused exclusively on producing that technology safely. The announcement added:
“We are assembling a lean, cracked team of the world’s best engineers and researchers dedicated to focusing on SSI and nothing else.”
Safe Superintelligence Inc. said it intends to advance capabilities as quickly as possible while keeping safety ahead of them. Because of this singular focus, the firm said, management overhead, short-term commercial pressures, and product cycles will not pull it off course.
“This way, we can scale in peace.”
The company also said its investors agree with prioritizing safe development over all other considerations.
In a Bloomberg interview, Sutskever declined to name the company’s financial backers or disclose how much has been raised so far. Gross, however, expressed confidence that “raising capital will not be a challenge” for the company.
Safe Superintelligence Inc. will be headquartered in Palo Alto, California, and maintain offices in Tel Aviv, Israel.
Launch follows safety concerns at OpenAI
Safe Superintelligence was established in the wake of a dispute at OpenAI. In November 2023, Sutskever was part of a group that attempted to remove OpenAI CEO Sam Altman from his position.
Early reporting at the time, including from The Atlantic, indicated that safety was a point of contention at the company during the dispute. An internal company memo, meanwhile, attributed Altman’s attempted ouster to a breakdown in communication between him and the firm’s board of directors.
Sutskever withdrew from the public eye for several months after the incident and formally departed OpenAI in May, just a few weeks ago. Although he gave no specific reasons for his departure, recent developments at the AI firm have brought the issue of AI safety to the forefront.
Jan Leike and Gretchen Krueger, both OpenAI employees, recently left the company over concerns about AI safety. Meanwhile, Vox reports that at least five more “safety-conscious employees” have departed since November.
Sutskever stated in an interview with Bloomberg that he has a cordial relationship with Altman and that OpenAI is aware of the new company “in general terms.”