
Technology

California waters down bill aimed at preventing AI disasters before final vote, taking advice from Anthropic


California’s AI disaster prevention bill, SB 1047, has faced significant opposition from many sides in Silicon Valley. Today, California lawmakers bowed to that pressure, adding several amendments proposed by AI firm Anthropic and other opponents.

The bill passed the California Appropriations Committee on Thursday, a significant step toward becoming law, with several key changes, Sen. Wiener’s office told TechCrunch.

“We accepted a number of very reasonable proposed amendments, and I believe we have addressed the core concerns expressed by Anthropic and many others in the industry,” Sen. Wiener said in a statement to TechCrunch. “These amendments build on the significant changes to SB 1047 that I made earlier to address the unique needs of the open source community, which is an important source of innovation.”

SB 1047 still aims to prevent large AI systems from killing many people or causing cybersecurity incidents that cost more than $500 million by holding developers accountable. However, the bill now gives California’s government less authority to hold AI labs to account.

What does SB 1047 do now?

Most notably, the bill no longer allows the California attorney general to sue AI companies for safety failures before a catastrophic event occurs. This was a suggestion from Anthropic.

Instead, the California attorney general can seek an injunction requiring a company to stop a specific operation it deems dangerous, and can still sue an AI developer if its model does cause a catastrophic event.

In addition, SB 1047 no longer creates the Frontier Model Division (FMD), a new government agency that was previously included in the bill. However, the bill still creates the Board of Frontier Models — the core of the FMD — and places it within the existing Government Operations Agency. In fact, the board is now larger, with nine members instead of five. The Board of Frontier Models will still set compute thresholds for covered models, issue safety guidance, and issue regulations for auditors.

Senator Wiener also amended SB 1047 so that AI labs are no longer required to certify their safety test results “under penalty of perjury.” Instead, these AI labs are simply required to submit public “statements” describing their safety practices, and the bill no longer imposes any criminal liability.

SB 1047 also now includes more lenient language on how developers ensure AI models are safe. The bill now requires developers to exercise “reasonable care” to ensure that AI models do not pose a significant risk of causing a catastrophe, rather than the “reasonable assurance” the bill previously required.

In addition, lawmakers added protections for fine-tuned open source models. If someone spends less than $10 million fine-tuning a covered model, they are not explicitly considered a developer under SB 1047. Responsibility will still rest with the original, larger developer of the model.

Why all these changes now?

While the bill has faced significant opposition from U.S. lawmakers, prominent AI researchers, Big Tech, and venture capitalists, it has passed through the California Legislature with relative ease. These amendments are likely to appease SB 1047’s opponents and present Governor Newsom with a less controversial bill that he can sign without losing support from the AI industry.

While Newsom has not commented publicly on SB 1047, he has previously signaled his commitment to AI innovation in California.

Anthropic told TechCrunch it is reviewing the changes to SB 1047 before taking a position. Not all of Anthropic’s proposed amendments were adopted by Senator Wiener.

“The goal of SB 1047 is — and always has been — to make AI safer while enabling innovation across the ecosystem,” said Nathan Calvin, senior policy adviser at the Center for AI Safety Action Fund. “The new amendments will support that goal.”

Still, these changes are unlikely to appease SB 1047’s staunchest critics. While the bill is noticeably weaker than it was before these amendments, SB 1047 still holds developers accountable for the risks associated with their AI models. That fundamental premise of SB 1047 is not universally supported, and these amendments do little to address it.

“These edits are window dressing,” said Martin Casado, a general partner at Andreessen Horowitz, in a tweet. “They don’t address the real issues or criticisms of the bill.”

In fact, moments after SB 1047 passed on Thursday, eight members of the United States Congress representing California wrote a letter asking Governor Newsom to veto SB 1047. They wrote that the bill “would not be good for our state, for the startup community, for scientific advancement, or even for protecting against possible harms associated with AI development.”

What’s next?

SB 1047 now heads to the California Assembly floor for a final vote. If it passes there, it will need to be sent back to the California Senate for a vote because of these latest amendments. If it passes both, it will head to Governor Newsom’s desk, where he can veto it or sign it into law.

This article was originally published on techcrunch.com.

Technology

Introducing the Next Wave of Startup Battlefield Judges at TechCrunch Disrupt 2024


Startup Battlefield 200 is the highlight of every Disrupt, and we can’t wait to find out which of the thousands of startups that applied will have the chance to pitch to top venture capitalists at TechCrunch Disrupt 2024. Join us at Moscone West in San Francisco October 28–30 for an epic showdown where startups will have the chance to make a major impact.

Get insight into what the judges are looking for in a winning company as they provide detailed feedback on the evaluation criteria. Don’t miss the opportunity to learn from their expert insights and discover the key characteristics that lead to startup success, only at Disrupt 2024.

We’re excited to introduce our next group of investors who will evaluate startups and dive into each pitch in an in-depth and insightful Q&A session. Stay tuned for more big names coming soon!

Alice Brooks, Partner, Khosla Ventures

Alice is a partner at Khosla Ventures, where she invests in sustainability, food, agriculture, and manufacturing/supply chain. She has worked with multiple startups in robotics, IoT, retail, consumer goods, and STEM education, and has led mechanical, electrical, and app development teams in the US and Asia. She also founded and managed manufacturing operations in factories in China and Taiwan. Prior to KV, Alice was the founder and CEO of Roominate, a STEM education company that helps girls learn engineering concepts through play.

Mark Crane, Partner, General Catalyst

Mark Crane is a partner at General Catalyst, a venture capital firm that works with founders from seed to endurance stage to help them build companies that can stand the test of time. He focuses on sourcing and investing in later-stage opportunities such as AuthZed, Bugcrowd, Resilience, and TravelPerk. Prior to joining General Catalyst, Mark was a vice president at Cove Hill Partners in Massachusetts. Before that, he was a senior associate at JMI Equity and an associate at North Bridge Growth Equity.

Sofia Dolfe, Partner, Index Ventures

Sofia partners with founders who use their unique perspective and personal understanding of a problem to build companies that drive behavioral change, create powerful network effects, and transform entire industries, from grocery and e-commerce to financial services and healthcare. Sofia is also one of Index Ventures’ gaming leads, working with some of the best gaming companies in Europe to create a new generation of iconic gaming titles. She spends most of her time in the Nordics but works with entrepreneurs across the continent.

Christine Esserman, Partner, Accel

Christine Esserman joined Accel in 2017 and focuses on software, internet, and mobile technology companies. Since joining Accel, Christine has helped lead Accel’s investments in Blackpoint Cyber, Linear, Merge, ThreeFlow, Bumble, Remote, Dovetail, Ethos, Guru, and Headway. Prior to Accel, Christine worked in product and operations roles at multiple startups. A Bay Area native, Christine graduated from the Wharton School at the University of Pennsylvania with a degree in Finance and Operations.

Haomiao Huang, Founding Partner, Matter Venture Partners

Haomiao Huang of Matter Venture Partners is a robotics researcher turned founder turned investor. He is particularly passionate about companies that bring digital innovation to physical-economy industries, with a focus on sectors such as logistics, manufacturing, and transportation, and on advanced technologies such as robotics and AI. Haomiao spent four years investing in hard tech with Wen Hsieh at Kleiner Perkins. He previously founded the smart home security startup Kuna, built autonomous cars at Caltech and, as part of his PhD research at Stanford, pioneered the aerodynamics and control of multi-rotor unmanned aerial vehicles. Kuna was part of Y Combinator’s Winter 2014 cohort.

Don’t miss it!

The Startup Battlefield winner, who will walk away with a $100,000 cash prize, will be announced at Disrupt 2024 — the epicenter of startups. Join 10,000 attendees to witness this breakthrough moment and see the next wave of tech innovation.

Register here and secure your spot to witness this epic battle of startups.

This article was originally published on techcrunch.com.

Technology

India Considers Easing Market Share Caps for UPI Payments Operators


PhonePe UPI being used to accept payments at a roadside sunglasses stall.

The regulator that oversees India’s popular UPI payments rail is considering relaxing a proposed market share cap for operators like Google Pay, PhonePe, and Paytm as it grapples with enforcing the restrictions, two people familiar with the matter told TechCrunch.

The National Payments Corporation of India (NPCI), which is regulated by the Indian central bank, is considering increasing the market share that UPI operators can hold to more than 40%, said two of the people, who requested anonymity because the information is confidential. The regulator had earlier proposed a 30% market share limit to encourage competition in the space.

UPI has become the most popular way to send and receive money in India, with the mechanism processing over 12 billion transactions monthly. Walmart-backed PhonePe has about a 48% market share by volume and 50% by value, while Google Pay has a 37.3% share by volume.

Once an industry heavyweight, Paytm has seen its market share fall to 7.2% from 11% late last year amid regulatory challenges.

According to several industry executives, raising the market share cap would be a controversial move by the NPCI, as many UPI providers were counting on regulatory action to curb the dominance of PhonePe and Google Pay.

NPCI, which has previously declined to comment on market share matters, did not respond to a request for comment on Thursday.

The regulator originally planned to enforce the market share caps in January 2021 but extended the deadline to January 1, 2025. The regulator has struggled to find a workable way to enforce its proposed caps.

The stakes are high, especially for PhonePe, India’s most valuable fintech startup, valued at $12 billion.

Sameer Nigam, co-founder and CEO of PhonePe, said last month that the startup cannot go public “if there is uncertainty on regulatory issues.”

“If you buy a share at Rs 100 and value it assuming we have 48-49% market share, there is uncertainty whether it will come down to 30% and when,” Nigam told a fintech conference last month. “We are reaching out to them (the regulator) whether they can find another way to at least address any concerns they have or tell us what the list of concerns is,” he added.

This article was originally published on techcrunch.com.

Technology

Bluesky addresses trust and safety issues related to abuse, spam, and more


Bluesky butterfly logo and Jay Graber

Social media startup Bluesky, which is building a decentralized alternative to X (formerly Twitter), provided an update Wednesday on how it is approaching various trust and safety issues on its platform. The company is in various stages of developing and piloting a number of initiatives focused on dealing with bad actors, harassment, spam, fake accounts, video safety, and more.

To address malicious users or those who harass others, Bluesky says it is developing new tooling that will be able to detect when multiple new accounts are created and managed by the same person. This could help curb harassment when a bad actor creates several different personas to attack their victims.

Another new experiment will help detect “rude” replies and surface them to server moderators. Like Mastodon, Bluesky will support a network where self-hosters and other developers can run their own servers that connect to Bluesky’s server and others on the network. This federation capability is still in early access. But further down the road, server moderators will be able to decide how they want to deal with people who post rude replies. In the meantime, Bluesky will eventually reduce the visibility of these replies in its app. Repeated rude labels on content will also lead to account-level labels and suspensions, it says.

To curb the use of lists to harass others, Bluesky will remove individual users from a list if they block the list’s creator. Similar functionality was recently introduced to Starter Packs, a type of shareable list that can help new users find people to follow on the platform (check out the TechCrunch Starter Pack).
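To make the mechanics concrete, here is a minimal, hypothetical TypeScript sketch of that rule; the types and field names are illustrative assumptions, not Bluesky’s actual data model.

```typescript
// Hypothetical sketch of the "block the creator, leave the list" rule.
// The List shape below is an assumption for illustration, not Bluesky's schema.
type List = { creatorDid: string; memberDids: string[] };

// Apply a new block: if the blocked account created any list the blocker
// appears on, remove the blocker from that list's membership.
function applyBlock(lists: List[], blockerDid: string, blockedDid: string): List[] {
  return lists.map((list) =>
    list.creatorDid === blockedDid
      ? { ...list, memberDids: list.memberDids.filter((did) => did !== blockerDid) }
      : list
  );
}
```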

Bluesky will also scan for lists with offensive names or descriptions to limit the potential for harassing others by adding them to a public list with a toxic or offensive name or description. Lists that violate Bluesky’s Community Guidelines will be hidden in the app until the list owner makes changes that align with Bluesky’s policies. Users who continue to create offensive lists will also face further action, though the company didn’t share details, adding that lists are still an area of active discussion and development.

In the coming months, Bluesky also intends to move to handling moderation reports through its app, using notifications rather than relying on email reports.

To combat spam and other fake accounts, Bluesky is launching a pilot that will attempt to automatically detect when an account is fake, scamming, or spamming users. Combined with moderation, the goal is to be able to take action on accounts within “seconds of receiving a report,” the company said.

One of the more interesting developments is how Bluesky will comply with local laws while still allowing for free speech. It will use geotags that allow it to hide a piece of content from users in a specific area in order to comply with the law.

“This allows Bluesky’s moderation service to maintain flexibility in creating spaces for free expression while also ensuring legal compliance so that Bluesky can continue to operate as a service in these geographic regions,” the company shared in a blog post. “This feature will be rolled out on a country-by-country basis, and we will endeavor to inform users of the source of legal requests when legally possible.”
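For illustration, here is a minimal, hypothetical TypeScript sketch of how a geography-scoped label could gate visibility; the label shape and field names are assumptions for this sketch, not Bluesky’s published labeling schema.

```typescript
// Hypothetical geography-scoped moderation label; not Bluesky's actual schema.
type GeoLabel = {
  subjectUri: string;        // the content the label applies to
  hiddenInRegions: string[]; // ISO country codes where the content must be hidden
};

// Decide whether labeled content should be shown to a viewer in a given country.
function isVisible(label: GeoLabel | undefined, viewerRegion: string): boolean {
  if (!label) return true; // unlabeled content is visible everywhere
  return !label.hiddenInRegions.includes(viewerRegion);
}

// Example: content hidden in one jurisdiction remains visible elsewhere.
const label: GeoLabel = {
  subjectUri: "at://did:plc:example/app.bsky.feed.post/xyz",
  hiddenInRegions: ["DE"],
};
console.log(isVisible(label, "DE")); // false: hidden to comply with local law
console.log(isVisible(label, "US")); // true: visible elsewhere
```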

To address potential trust and safety issues with the recently added video feature, the team is adding features such as the ability to turn off autoplay, making sure videos are labeled, and providing the ability to report videos. The team is still evaluating what else may need to be added, and will prioritize based on user feedback.

When it comes to abuse, the company says its general framework is “a question of how often something happens versus how harmful it is.” The company focuses on addressing high-harm, high-frequency issues, as well as “tracking edge cases that could result in significant harm to a few users.” The latter, while affecting only a small number of people, causes enough “ongoing harm” that Bluesky will take action to prevent the abuse, it says.

User concerns can be raised via moderation reports, emails, and mentions of the @safety.bsky.app account.

This article was originally published on techcrunch.com.