Silicon Valley is agitated over a California measure that would require AI developers to establish safety protocols to prevent “critical harms” to humanity.
The AI law would “burden startups because of its arbitrary and shifting thresholds,” according to venture capital firm a16z.
California’s “Safe and Secure Innovation for Frontier Artificial Intelligence Models Act” (SB 1047) would mandate that AI developers put safety measures in place to prevent catastrophic cyberattacks or mass casualties.
Democratic lawmakers in California introduced the bill in February.
Along with requiring an “emergency stop” button for AI models, the proposed law also calls for annual third-party audits of AI safety procedures, the establishment of a new Frontier Model Division (FMD) to oversee compliance, and harsh penalties for noncompliance.
The bill has also drawn opposition in Congress. On August 13, US Congressman Ro Khanna released a statement opposing SB 1047, arguing that “the bill as currently written would be ineffective, punishing of individual entrepreneurs and small businesses, and hurt California’s spirit of innovation.” Khanna, who represents Silicon Valley, nonetheless acknowledged the need for AI legislation “to protect workers and address potential risks including misinformation, deepfakes, and an increase in wealth disparity.”
Silicon Valley venture capital firms, including Andreessen Horowitz, have opposed the law, claiming it will burden entrepreneurs and impede innovation.
One of the bill’s authors, Senator Scott Wiener, received a letter from a16z Chief Legal Officer Jaikumar Ramaswamy on August 2, in which he claimed the law would “burden startups because of its arbitrary and shifting thresholds.”
Prominent industry experts like Andrew Ng and Fei-Fei Li have also voiced opposition, arguing that it will negatively impact open-source development and the AI ecosystem.
The computer scientist Li told Fortune on August 6:
“If passed into law, SB-1047 will harm our budding AI ecosystem, especially the parts of it that are already at a disadvantage to today’s tech giants: the public sector, academia, and ‘little tech.’”
Big Tech firms contend that excessive regulation of AI will stifle free expression and may drive innovation in the tech sector out of California.
In a post on X in June, Yann LeCun, Meta’s chief AI scientist, argued that the legislation would hinder research, claiming that “regulating R&D would have apocalyptic consequences on the AI ecosystem.”
After receiving bipartisan approval in the Senate, the bill now moves to the Assembly, where it must pass by August 31.