California waters down bill aimed at preventing AI disasters before final vote, taking advice from Anthropic

California’s AI disaster prevention bill, SB 1047, has faced significant opposition from many sides in Silicon Valley. Today, California lawmakers bowed to that pressure, adding several amendments proposed by AI firm Anthropic and other opponents.

The bill passed the California Appropriations Committee on Thursday, a major step toward becoming law, with several key changes, Sen. Wiener's office told TechCrunch.

"We accepted a number of very reasonable proposed amendments, and I believe we've addressed the core concerns expressed by Anthropic and many others in the industry," Sen. Wiener said in a statement to TechCrunch. "These amendments build on the significant changes to SB 1047 I made previously to accommodate the unique needs of the open source community, which is an important source of innovation."

SB 1047 still aims to prevent large AI systems from causing mass casualties or triggering cybersecurity incidents that cost more than $500 million, by holding developers accountable. However, the bill now gives the California government less authority to hold AI labs to account.

What does SB 1047 do now?

Most importantly, the bill no longer allows the California attorney general to sue AI firms for negligent safety practices before a catastrophic event occurs. This was a suggestion from Anthropic.

Instead, the California attorney general can seek an injunction requiring a company to stop a specific operation it deems dangerous, and can still sue an AI developer if its model actually causes a catastrophic event.

In addition, SB 1047 no longer creates the Frontier Model Division (FMD), a new government agency previously included in the bill. However, the bill still creates the Board of Frontier Models — the core of the FMD — and places it within the existing Government Operations Agency. In fact, the board is now larger, with nine members instead of five. The Board of Frontier Models will still set compute thresholds for covered models, issue safety guidance, and issue regulations for auditors.

Senator Wiener also amended SB 1047 so that AI labs are no longer required to certify their safety test results "under penalty of perjury." Instead, these AI labs are simply required to submit public "statements" describing their safety practices, and the bill no longer imposes any criminal liability.

SB 1047 also now includes more lenient language on how developers ensure AI models are safe. The bill now requires developers to exercise "reasonable care" to ensure AI models don't pose a significant risk of causing a catastrophe, rather than the "reasonable assurance" standard the bill previously required.

In addition, lawmakers added protections for open source models that have been fine-tuned. If someone spends less than $10 million fine-tuning a covered model, they are explicitly not considered a developer under SB 1047. Responsibility will still rest with the original, larger developer of the model.

Why all these changes now?

While the bill has faced significant opposition from U.S. lawmakers, prominent AI researchers, Big Tech, and venture capitalists, the bill has moved through the California Legislature with relative ease. These amendments will likely appease opponents of SB 1047 and present Governor Newsom with a less controversial bill that he can sign without losing support from the AI industry.

While Newsom has not commented publicly on SB 1047, he has previously indicated his commitment to AI innovation in California.

Anthropic told TechCrunch it is reviewing the changes to SB 1047 before taking a position. Not all of Anthropic's proposed amendments were accepted by Senator Wiener.

“The goal of SB 1047 is — and always has been — to make AI safer while enabling innovation across the ecosystem,” said Nathan Calvin, senior policy adviser at the Center for AI Safety Action Fund. “The new amendments will support that goal.”

Still, these changes are unlikely to appease SB 1047's staunchest critics. While the bill is noticeably weaker than it was before these amendments, SB 1047 still holds developers accountable for the risks associated with their AI models. That fundamental feature of SB 1047 is not universally supported, and these amendments do little to address it.

"These edits are window dressing," said Martin Casado, general partner at Andreessen Horowitz, in a tweet. "They don't address the real issues or criticisms of the bill."

In fact, moments after SB 1047 passed on Thursday, eight members of the United States Congress representing California wrote a letter asking Governor Newsom to veto SB 1047. They wrote that the bill "would not be good for our state, for the startup community, for scientific advancement, or even for protecting against possible harms associated with AI development."

What’s next?

SB 1047 now heads to the California Assembly floor for a final vote. If it passes there, it will need to be sent back to the California Senate for a vote on these latest amendments. If it passes both votes, it will head to Governor Newsom's desk, where he can veto it or sign it into law.

This article was originally published on techcrunch.com
