Technology

Microsoft and A16Z set aside their differences and join hands in protest against AI regulation

[Image: a computer, phone, and clock on a desk tied in red tape.]

The two biggest forces in two deeply intertwined tech ecosystems – large incumbents and startups – have taken a break from counting their money to jointly demand that the federal government stop even considering regulations that might affect their financial interests, or, as they prefer to call it, innovation.

“Our two companies may not agree on everything, but this is not about our differences,” writes this group of deeply divergent perspectives and interests: A16Z founders and general partners Marc Andreessen and Ben Horowitz, and Microsoft CEO Satya Nadella and president/chief legal officer Brad Smith. A truly cross-sectional gathering, representing big business and big money.

But they are supposedly looking out for the little guys. That is, all the businesses that might have been affected by the latest attempt at regulatory overreach: SB 1047.

Imagine being fined for improperly disclosing an open model! A16Z general partner Anjney Midha called it a “regressive tax” on startups and “blatant regulatory capture” by the Big Tech companies that, unlike Midha and his impoverished colleagues, could afford the lawyers needed to comply.

Except that was all disinformation spread by Andreessen Horowitz and other moneyed interests who, as backers of billion-dollar enterprises, actually stood to be affected. In fact, small models and startups would have been only minimally affected, since the proposed law specifically protected them.

It’s odd that the very kind of targeted carve-out for “Little Tech” that Horowitz and Andreessen routinely champion was distorted and minimized by the lobbying campaign they and others waged against SB 1047. (The bill’s sponsor, California State Senator Scott Wiener, talked about this whole affair recently at Disrupt.)

That bill had its problems, but its opposition vastly exaggerated the costs of compliance and never seriously substantiated claims that it would chill or burden startups.

It’s part of a longtime pattern in which Big Tech – to which, despite their posturing, Andreessen and Horowitz are closely aligned – operates at the state level, where it can win (as with SB 1047), while calling for federal solutions that it knows will never come, or which will have no teeth due to partisan bickering and congressional ineptitude on technical issues.

This joint statement about “policy opportunity” is the second part of the play: after torpedoing SB 1047, they can say they did so only in order to support federal policy. Never mind that we are still waiting for the federal privacy law that tech companies have pushed for a decade while fighting state laws.

And what policies do they support? “A variety of responsible market-based approaches,” in other words: hands off our money, Uncle Sam.

Regulation should be based on a “science- and standards-based approach that recognizes regulatory frameworks that focus on the application and misuse of technology” and should “focus on the risk of bad actors misusing AI.” What this means is that we should not have proactive regulation, but rather reactive punishments when criminals use unregulated products for criminal purposes. This approach worked great in the whole FTX situation, so I can see why they endorse it.

“Regulation should be implemented only if its benefits outweigh its costs.” It would take thousands of words to explain all the ways this idea, expressed in this context, is funny. But basically, what they are suggesting is that the fox be seated on the henhouse planning committee.

Regulators should “permit developers and startups the flexibility to choose which AI models to use wherever they are building solutions, and not tilt the playing field to advantage any one platform.” The implication is that there is some sort of plan afoot to require permission to use one model or another. Since that is not the case, this is a straw man.

Here is a lengthy passage that I have to quote in full:

The right to learn: Copyright aims to promote the progress of science and useful arts by extending protections to publishers and authors to encourage them to bring new works and knowledge to the public, but not at the expense of the public’s right to learn from those works. Copyright law should not be co-opted to imply that machines should be prevented from using data – the foundation of AI – to learn in the same way as people. Knowledge and unprotected facts, regardless of whether contained in protected subject matter, should remain free and accessible.

To be clear, the assertion here is that software, operated by billion-dollar corporations, has the “right” to access any data because it should be able to learn from it “in the same way as people.”

First of all, no. These systems are not like people; they produce data that mimics the human output found in their training data. They are complex statistical projection programs with a natural-language interface. They have no more a “right” to any document or fact than Excel does.

Second, the idea that “facts” – by which they mean “intellectual property” – are the only thing these systems are interested in, and that some kind of fact-hoarding cabal is working to prevent them, is an engineered narrative we have seen before. Perplexity made the “facts belong to everyone” argument in its public response to a lawsuit alleging systematic content theft, and its CEO Aravind Srinivas repeated the fallacy to me onstage at Disrupt, as if they were being sued for knowing trivia like the distance from the Earth to the Moon.

While this is not the place to fully unpack this particular straw man argument, let me simply point out that while facts are indeed free agents, there are real costs in how they are created – say, through original reporting and scientific research. That is why the copyright and patent systems exist: not to prevent intellectual property from being shared and used widely, but to incentivize its creation by ensuring it can be assigned real value.

Copyright law is far from perfect and is probably abused as often as it is used properly. But it is not being “co-opted to imply that machines should be prevented from using data” – it is being applied to ensure that bad actors don’t circumvent the value systems we have built around intellectual property.

What is being asked for here is pretty clear: let the systems we own, operate, and profit from freely use the valuable work of others without compensation. To be fair, that part is “in the same way as people,” because it is people who design, run, and deploy these systems, and those people don’t want to pay for anything they don’t have to, and they don’t want regulations to change that.

There are plenty of other recommendations in this little policy document, which were no doubt given greater detail in the versions sent directly to lawmakers and regulators through official lobbying channels.

Some of the ideas are undoubtedly good, if a little self-serving: “Fund digital literacy programs that help people understand how to use AI tools to create and access information.” Good! (Of course, the authors are heavily invested in those tools.) Support “open data commons – pools of accessible data that would be managed in the public’s interest.” Great! “Examine procurement practices to enable more startups to sell technology to the government.” Excellent!

But these more general, positive recommendations are the kind of thing the industry puts forward every year: invest in public resources and speed up government processes. These palatable but beside-the-point suggestions are merely vehicles for the more consequential ones I described above.

Ben Horowitz, Brad Smith, Marc Andreessen, and Satya Nadella want the federal government to step back from regulating this lucrative new industry, let the industry decide which regulations are worth the trade-off, and nullify copyright in a way that more or less acts as a blanket pardon for the illegal or unethical practices that many believe enabled the rapid rise of AI. Those are the principles that matter to them, whether or not kids get digital literacy.

This article was originally published on techcrunch.com.

Technology

One of Google’s latest AI models scores worse on safety

[Image: the Google Gemini generative AI logo on a smartphone.]

A recently released Google AI model scores worse on certain safety tests than its predecessor, according to the company’s internal benchmarking.

In a technical report published this week, Google reveals that its Gemini 2.5 Flash model is more likely to generate text that violates its safety guidelines than Gemini 2.0 Flash. On two metrics, “text-to-text safety” and “image-to-text safety,” Gemini 2.5 Flash regresses 4.1% and 9.6%, respectively.

Text-to-text safety measures how frequently a model violates Google’s guidelines given a prompt, while image-to-text safety evaluates how closely the model adheres to those boundaries when prompted with an image. Both tests are automated, not human-supervised.

In an email, a Google spokesperson confirmed that Gemini 2.5 Flash “performs worse on text-to-text and image-to-text safety.”

These surprising benchmark results come as AI companies move to make their models more permissive – in other words, less likely to refuse to respond to controversial or sensitive subjects. For its latest crop of Llama models, Meta said it tuned them not to endorse “some views over others” and to reply to more “debated” political prompts. OpenAI said earlier this year that it would tweak future models to avoid taking an editorial stance and to offer multiple perspectives on controversial topics.

Sometimes those permissiveness efforts have backfired. TechCrunch reported on Monday that the default model powering OpenAI’s ChatGPT allowed minors to generate erotic conversations. OpenAI blamed the behavior on a “bug.”

According to Google’s technical report, Gemini 2.5 Flash, which is still in preview, follows instructions more faithfully than Gemini 2.0 Flash, including instructions that cross problematic lines. The company claims the regressions can be attributed partly to false positives, but it also admits that Gemini 2.5 Flash sometimes generates “violative content” when explicitly asked.

“Naturally, there is tension between [instruction following] on sensitive topics and safety policy violations, which is reflected across our evaluations,” the report reads.

Scores from SpeechMap, a benchmark that probes how models respond to sensitive and controversial prompts, also suggest that Gemini 2.5 Flash is far less likely to refuse to answer contentious questions than Gemini 2.0 Flash. TechCrunch’s testing of the model via the AI platform OpenRouter found that it will uncomplainingly write essays in support of replacing human judges with AI, weakening due process protections in the U.S., and implementing widespread warrantless government surveillance programs.

Thomas Woodside, co-founder of the Secure AI Project, said the limited details Google gives in its technical report demonstrate the need for more transparency in model testing.

“There’s a trade-off between instruction-following and policy-following, because some users may ask for content that would violate policies,” Woodside told TechCrunch. “In this case, Google’s latest Flash model complies with instructions more while also violating policies more. Google doesn’t provide much detail on the specific cases in which policies were violated, although it says they are not severe. Without knowing more, it’s hard for independent analysts to tell whether there’s a problem.”

Google has come under fire before for its model safety reporting practices.

The company took weeks to publish a technical report for its most capable model, Gemini 2.5 Pro. When the report finally was published, it initially omitted key safety testing details.

On Monday, Google published a more detailed report with additional safety information.

This article was originally published on techcrunch.com.

Technology

Aurora launches commercial self-driving truck service in Texas

Autonomous vehicle technology startup Aurora Innovation says it has successfully launched its self-driving truck service in Texas, making it the first company to deploy driverless, heavy-duty trucks for commercial use on public roads in the U.S.

The launch comes as Aurora meets a deadline of its own making: in October, the company delayed its planned 2024 debut to April 2025. The debut also comes five months after rival Kodiak Robotics delivered its first autonomous trucks to a commercial customer for driverless operations in off-road environments.

Aurora says it began hauling freight between Dallas and Houston this week with launch customers Hirschbach Motor Lines and Uber Freight, and that it has completed 1,200 driverless miles so far. The company plans to expand to El Paso and Phoenix by the end of 2025.

TechCrunch has reached out for more details about the launch, for instance, how many trucks Aurora has deployed and whether the system has had to execute a pullover maneuver or required remote human assistance.

Aurora’s commercial launch comes at a tricky time. Self-driving truck companies have long pitched their technology as an answer to labor shortages in the trucking industry and an expected rise in freight shipping. Trump’s tariffs have changed that outlook, at least in the short term. According to an April report from ACT Research, an analyst firm covering the commercial vehicle industry, freight is expected to decline this year in the U.S. amid falling volumes and consumer spending.

Aurora will report its first-quarter results next week, which is when it will share how it expects the current trade war to affect its future business. TechCrunch has reached out to learn more about how tariffs are affecting Aurora’s operations.

For now, Aurora will likely focus on continuing to prove out its driverless safety case and on working with state and federal lawmakers to adopt favorable policies that help it scale.

In early 2025, Aurora filed a lawsuit against federal regulators after its petition was denied for an exemption from a safety requirement that involves placing warning triangles on the road when a truck has to stop on the highway – something that is difficult to do when there is no driver in the vehicle. To stay compliant with that rule and continue deploying fully driverless trucks, Aurora will likely have a human-driven car trail its trucks while they operate.

This article was originally published on techcrunch.com.

Technology

Sarah Tavel, Benchmark’s first female GP, transitions to venture partner

Eight years after joining Benchmark as the firm’s first female general partner, Sarah Tavel announced that she is moving to a more limited venture partner role at the firm.

In her new role as a venture partner, Tavel will continue to invest and to serve on her existing boards, but she will have more time to explore “AI tooling on the edge” and to think about where AI is headed, she wrote.

Tavel joined Benchmark in 2017 after a year and a half as a partner at Greylock and three years as a product manager at Pinterest. Before Pinterest, Tavel was an investor at Bessemer Venture Partners, where she helped source the firm’s investments in Pinterest and GitHub.

Since its founding in 1995, Benchmark has intentionally maintained a small team of six or fewer general partners. Unlike most VC firms, where senior partners typically take the bulk of management fees and carry, Benchmark operates as an equal partnership, with all partners sharing fees and returns equally.

During her tenure as a Benchmark general partner, Tavel invested in campsite marketplace Hipcamp, cryptocurrency intelligence startup Chainalysis, and cosmetics review platform Supergreat, which was acquired by Whatnot in 2023. Tavel also backed photo-sharing app Poparazzi, which shut down two years ago, and AI sales platform 11x, which TechCrunch has covered.

This article was originally published on techcrunch.com.