The former OpenAI chief scientist Ilya Sutskever has just launched his new AI company, Safe Superintelligence Inc. (SSI).
Ilya Sutskever, a co-founder of OpenAI, has launched a new company, Safe Superintelligence Inc. (SSI), less than a month after leaving the organization.
Sutskever, OpenAI's longtime chief scientist, founded SSI with Daniel Gross, a former Y Combinator partner, and Daniel Levy, an ex-OpenAI engineer.
At OpenAI, Sutskever was a central figure in the company's efforts to improve AI safety ahead of the rise of "superintelligent" AI systems, work he pursued alongside Jan Leike. But in May, both Sutskever and Leike abruptly left the company after clashing with OpenAI's leadership over its approach to AI safety. Leike now leads a team at Anthropic.
Sutskever has long grappled with the thornier problems of AI safety. In a 2023 blog post co-written with Leike, he predicted that AI surpassing human intelligence could arrive within the decade, and that when it does, it won't necessarily be benevolent, making research into ways to control and restrict it essential.
"SSI is our sole focus, our name, and our entire product roadmap," the launch announcement tweet reads. "Our team, investors, and business model are all aligned to achieve SSI. We approach safety and capabilities in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs."

The company says it plans to advance capabilities as fast as possible while making sure its safety work always stays ahead, so that it can scale in peace. Its business model is meant to insulate safety, security, and progress from short-term commercial pressures, and its singular focus means no distraction from management overhead or product cycles.
SSI is recruiting technical talent for its Palo Alto and Tel Aviv offices.