
California waters down bill aimed at preventing AI disasters before final vote, taking advice from Anthropic


California’s AI disaster prevention bill, SB 1047, has faced significant opposition from many sides in Silicon Valley. Today, California lawmakers bowed to that pressure, adding several amendments proposed by AI firm Anthropic and other opponents.

The bill passed the California Appropriations Committee on Thursday, a significant step toward becoming law, with several key changes, Sen. Wiener’s office told TechCrunch.

“We accepted a number of very sensible proposed amendments, and I believe we addressed the fundamental concerns expressed by Anthropic and many others in the industry,” Sen. Wiener said in a statement to TechCrunch. “These amendments build on the significant changes to SB 1047 that I made earlier to address the unique needs of the open source community, which is an important source of innovation.”


SB 1047 still aims to prevent large AI systems from causing mass casualties, or cybersecurity incidents costing more than $500 million, by holding developers liable. However, the bill now grants California’s government less authority to hold AI labs to account.

What does SB 1047 do now?

Most importantly, the bill no longer allows the California attorney general to sue AI companies for negligent safety practices before a catastrophic event occurs. This was a suggestion from Anthropic.

Instead, the California attorney general can seek an injunction requiring a company to stop a specific operation it deems dangerous, and can still sue an AI developer if its model actually causes a catastrophic event.

In addition, SB 1047 no longer creates the Frontier Model Division (FMD), a new government agency previously included in the bill. However, the bill still creates the Board of Frontier Models, the core of the FMD, and places it within the existing Government Operations Agency. In fact, the board is now larger, with nine members instead of five. The Board of Frontier Models will still set compute thresholds for covered models, issue safety guidance, and issue regulations for auditors.


Senator Wiener also amended SB 1047 so that AI labs are no longer required to certify their safety test results “under penalty of perjury.” Instead, these AI labs are simply required to submit public “statements” describing their safety practices, and the bill no longer imposes any criminal liability.

SB 1047 also now includes more lenient language on how developers ensure AI models are safe. The bill now requires developers to exercise “reasonable care” to ensure AI models do not pose a significant risk of causing a catastrophe, rather than the “reasonable assurance” the bill previously required.

In addition, lawmakers added protections for open source models that have been fine-tuned. If someone spends less than $10 million fine-tuning a covered model, they are not explicitly considered a developer under SB 1047. Responsibility will still rest with the original, larger developer of the model.

Why all these changes now?

While the bill has faced significant opposition from U.S. lawmakers, prominent AI researchers, Big Tech, and venture capitalists, it has passed the California Legislature with relative ease. These amendments are likely to appease SB 1047’s opponents and present Governor Newsom with a less controversial bill he can sign into law without losing support from the AI industry.


While Newsom has not commented publicly on SB 1047, he has previously indicated his commitment to AI innovation in California.

Anthropic tells TechCrunch it’s reviewing changes to SB 1047 before taking a position. Not all of Anthropic’s proposed amendments were accepted by Senator Wiener.

“The goal of SB 1047 is — and always has been — to make AI safer while enabling innovation across the ecosystem,” said Nathan Calvin, senior policy adviser at the Center for AI Safety Action Fund. “The new amendments will support that goal.”

Still, these changes are unlikely to appease SB 1047’s staunchest critics. While the bill is notably weaker than before these amendments, SB 1047 still holds developers accountable for the dangers posed by their AI models. That core fact about SB 1047 is not universally supported, and these amendments do little to address it.


“These edits are just window dressing,” said Martin Casado, a general partner at Andreessen Horowitz, in a tweet. “They don’t address the real issues or criticisms of the bill.”

In fact, moments after SB 1047 passed on Thursday, eight members of the United States Congress representing California wrote a letter asking Governor Newsom to veto SB 1047. They wrote that the bill “would not be good for our state, for the startup community, for scientific advancement, or even for protecting against possible harms associated with AI development.”

What’s next?

SB 1047 now heads to the California Assembly floor for a final vote. If it passes there, it will need to be referred back to the California Senate for a vote on these latest amendments. If it passes both, it will head to Governor Newsom’s desk, where it can be vetoed or signed into law.

This article was originally published on techcrunch.com


After Klarna, Zoom’s CEO also uses an AI avatar on quarterly earnings call


Zoom CEO Eric Yuan

CEOs are now apparently so enamored with AI that they are sending their avatars to handle quarterly earnings calls in their stead, at least in part.

After Klarna’s CEO had his AI avatar appear on an investor call earlier this week, Zoom CEO Eric Yuan followed suit, also using his avatar for initial comments. Yuan deployed his custom avatar via Zoom Clips, the company’s asynchronous video tool.

“I am proud to be among the first CEOs to use an avatar in an earnings call,” he said, or rather, his avatar did. “It is just one example of how Zoom is pushing the boundaries of communication and collaboration. At the same time, we know trust and security are essential. We take AI-generated content seriously and have built strong safeguards to prevent misuse, protect user identity, and ensure avatars are used responsibly.”


Yuan has long championed the use of avatars in meetings, and has previously said that the company aims to create digital twins of users. He is not alone in that vision; the CEO of an AI-powered transcription company is reportedly training his own avatar to share the load.

Meanwhile, Zoom said it is making its custom avatar feature available to all users this week.


This article was originally published on techcrunch.com


OpenAI’s next big product won’t be a wearable: report


Sam Altman speaks onstage during The New York Times Dealbook Summit 2024.

OpenAI pushed generative AI into the public consciousness. Now, it may be developing a very different kind of AI device.

According to a WSJ report, OpenAI CEO Sam Altman told employees on Wednesday that the company’s next major product would not be a wearable. Instead, it will be a compact, screenless device, fully aware of its user’s surroundings. Small enough to sit on a desk or fit in a pocket, Altman described it both as a “third device” alongside the MacBook Pro and iPhone, and as an “AI companion” integrated into everyday life.

The preview came after OpenAI announced it was acquiring io, a startup founded last year by former Apple designer Jony Ive, in an all-equity deal valued at $6.5 billion. Ive will take on a key creative and design role at OpenAI.


Altman reportedly told employees that the acquisition could ultimately add $1 trillion in value to the company, as it spawns a new family of devices unlike the wearables or glasses that other outfits have pitched.

Altman also reportedly stressed to staff that secrecy would be crucial to prevent competitors from copying the product before launch. As it turns out, a recording of his comments leaked to the Journal, raising questions about how much he can trust his own team, and how much more he will be willing to reveal.


This article was originally published on techcrunch.com


Google’s latest Gemma AI model can run on phones


Google’s family of “open” AI models, Gemma, is growing.

At Google I/O 2025 on Tuesday, Google took the wraps off Gemma 3n, a model designed to run “smoothly” on phones, laptops, and tablets. Available in preview starting Tuesday, Gemma 3n can handle audio, text, images, and video, according to Google.

Models efficient enough to run offline, without the need for cloud compute, have gained popularity in the AI community in recent years. Not only are they cheaper to use than large models, but they preserve privacy by eliminating the need to send data to a remote data center.


During a talk at I/O, Gemma product manager Gus Martins said Gemma 3n can run on devices with less than 2GB of RAM. “Gemma 3n shares the same architecture as Gemini Nano, and is engineered for incredible performance,” he added.

In addition to Gemma 3n, Google is releasing MedGemma through its Health AI Developer Foundations program. According to Google, MedGemma is its most capable open model for analyzing health-related text and images.

“MedGemma (is) our (…) collection of open models for multimodal (health) text and image understanding,” said Martins. “MedGemma works great across various imaging and text applications, so that developers (…) can adapt the models for their own health applications.”

Also on the horizon is SignGemma, an open model for translating sign language into spoken-language text. Google claims SignGemma will enable developers to create new apps and integrations for deaf and hard-of-hearing users.


“SignGemma is a new family of models trained to translate sign language into spoken-language text, though it is best at American Sign Language and English,” said Martins. “It is the most capable sign language understanding model ever, and we are looking forward to you, developers and the deaf and hard-of-hearing communities, taking this foundation and building with it.”

It is worth noting that Gemma has been criticized for its custom, non-standard license terms, which some developers say have made using the models commercially a risky proposition. That has not discouraged developers from downloading Gemma models tens of millions of times, however.



This article was originally published on techcrunch.com