OpenAI Set Up a Team to Tame ‘Superintelligent’ AI, Then the Team Quit


OpenAI promised its Superalignment team 20% of the company’s compute resources to steer and control “superintelligent” AI, but the team’s requests for that compute were often denied, stalling its work.

That and other grievances prompted several team members to resign last week, among them co-lead Jan Leike, a former DeepMind researcher who, during his time at OpenAI, contributed to the development of GPT-4, ChatGPT, and InstructGPT.

Leike made his resignation public on Friday morning. In a series of posts on X, he wrote: “I have been disagreeing with OpenAI leadership about the company’s core priorities for quite some time, until we finally reached a breaking point. I think a lot more of our time and resources should go into preparing for the upcoming model generations: addressing security, monitoring, readiness, adversarial robustness, (super)alignment, confidentiality, societal impact, and other related concerns. These are hard problems to get right, and I worry we are not headed in that direction.”

Leike continued on X:

Over the past few months my team has been sailing against the wind. Sometimes we were struggling for compute and it was getting harder and harder to get this crucial research done. Building smarter-than-human machines is an inherently dangerous endeavor. OpenAI is shouldering an enormous responsibility on behalf of all of humanity.

OpenAI did not immediately respond to a request for comment about the resources promised and allocated to the team.

The Superalignment team, which OpenAI formed in July of last year, was led by Leike and OpenAI co-founder Ilya Sutskever, who also announced his resignation this week. Its ambitious goal was to solve the core technical challenges of controlling superintelligent AI within four years. The team combined scientists and engineers from OpenAI’s previous alignment division with researchers from other organizations across the company. It was to contribute research on the safety of both in-house and external models and, through initiatives such as a research grant program, to solicit and share work with the broader AI industry.

The Superalignment team did manage to publish a substantial body of safety research and to distribute millions of dollars in grants to independent researchers. However, as product launches began to consume an increasing share of OpenAI leadership’s bandwidth, the team found itself fighting for upfront investments it considered essential to the company’s stated mission of developing superintelligent AI for the benefit of all humanity.

“But over the past years, safety culture and processes have taken a backseat to shiny products,” Leike went on.

Another significant distraction was Sutskever’s conflict with Sam Altman, the CEO of OpenAI.

Late last year, Sutskever and OpenAI’s then-board of directors abruptly fired Altman over concerns that he had not been “consistently candid” with board members. Under pressure from investors, including Microsoft, and from many of the company’s employees, Altman was reinstated, most of the board resigned, and Sutskever reportedly never returned to work at OpenAI.

According to a source, Sutskever was crucial to the Superalignment team’s success, not only contributing research but also serving as a bridge to other OpenAI departments and as an ambassador of sorts, impressing the value of the team’s work on key decision-makers at the company.

John Schulman, another OpenAI co-founder, took over Superalignment’s work after Leike and Sutskever left. However, the team will no longer be a dedicated unit; instead, it will be a loosely affiliated group of researchers embedded in divisions across the company. An OpenAI spokesperson described the change as “integrating [the team] more deeply.”

The fear is that, as a result, OpenAI’s AI development will not be as safety-focused as it could have been.
