Technology

Microsoft and A16Z are putting aside their differences and joining hands in protest against artificial intelligence regulations

The two biggest forces in two deeply intertwined tech ecosystems – large incumbents and startups – have taken a break from counting money to jointly demand that the federal government stop even considering regulations that might affect their financial interests or, as they prefer to call it, innovation.

“Our two companies may not agree on everything, but this is not about our differences,” write the members of this group with very different perspectives and interests: A16Z founders and general partners Marc Andreessen and Ben Horowitz, Microsoft CEO Satya Nadella, and Microsoft president and chief legal officer Brad Smith. A truly cross-sectional gathering, representing big business and big money.

But they are supposedly looking out for the little guy. That is, all the businesses that might have been affected by the latest attempt at regulatory abuse: SB 1047.

Imagine being fined for improperly disclosing an open model! A16Z general partner Anjney Midha called it a “regressive tax” on startups and “blatant regulatory capture” by the Big Tech firms that, unlike Midha and his impoverished colleagues, could afford the lawyers needed to comply.

Except that was all disinformation spread by Andreessen Horowitz and other moneyed interests that actually stood to be affected as backers of billion-dollar enterprises. In fact, small models and startups would have been only lightly affected, since the proposed law specifically carved them out.

It is strange that the very kind of targeted carve-out for “Little Tech” that Horowitz and Andreessen routinely champion was distorted and minimized by the lobbying campaign they and others waged against SB 1047. (The bill’s sponsor, California State Senator Scott Wiener, talked about this whole affair recently in an interview at Disrupt.)

The bill had its problems, but its opposition greatly exaggerated compliance costs and never meaningfully substantiated claims that it would chill or burden startups.

It is part of a long-running pattern in which Big Tech – to which, despite their posture, Andreessen and Horowitz are closely tied – operates at the state level, where it can win (as with SB 1047), while calling for federal solutions that it knows will never come, or that will have no teeth thanks to partisan bickering and congressional ineptitude on technical issues.

This joint statement of “political opportunity” is the second part of the game: having torpedoed SB 1047, they can now say they did so only in support of federal policy. Never mind that we are still waiting for the federal privacy law that tech companies have been pushing for a decade while fighting state laws.

What policies do they support? “A variety of responsible market approaches,” in other words: hands off our money, Uncle Sam.

Regulation should be built on a “science-based and standards-based approach, recognizing regulatory frameworks that focus on the use and misuse of technology” and should “focus on the risk of bad actors exploiting artificial intelligence.” This means no proactive regulation, but rather reactive penalties when criminals use unregulated products for criminal purposes. That approach worked out great for the whole FTX situation, so I can see why they endorse it.

“The regulation should only be implemented if the benefits outweigh the costs.” It would take thousands of words to explain all the ways this idea, expressed in this context, is funny. But basically, they are suggesting that the fox be seated on the henhouse planning committee.

Regulators should “allow developers and startups the flexibility to choose AI models to use wherever they build solutions, and not tilt the playing field in favor of any one platform.” The implication is that there is some plot to require permission to use one model or another. Since that is not the case, this is a straw man.

Here is a lengthy passage that I must quote in full:

The right to learn: Copyright aims to promote the progress of science and the useful arts by extending protection to publishers and authors to encourage them to make new works and knowledge available to the public, but not at the expense of society’s right to learn from those works. Copyright law should not be co-opted to suggest that machines must be prevented from using data – the foundation of artificial intelligence – to learn in the same way as humans. Unprotected knowledge and facts, whether or not contained in protected subject matter, should remain free and accessible.

To be clear, the assertion here is that software operated by billion-dollar corporations has the “right” to access any data because it should be able to learn from it “in the same way as humans.”

First of all, no. These systems are not like people; they produce imitations of the human output found in their training data. They are complex statistical projection programs with a natural language interface. They have no more “right” to any document or fact than Excel does.

Second, the notion that “facts” – by which they mean “intellectual property” – are the only thing these systems are interested in, and that some kind of fact-hoarding cabal is working to stop them, is a manufactured narrative we have seen before. Perplexity made the “facts belong to everyone” argument in its public response to a lawsuit alleging systematic content theft, and its CEO Aravind Srinivas repeated the error to me on stage at Disrupt, as if they were being sued for knowing tidbits like the Earth’s distance from the Moon.

While this is not the place to fully dissect this particular straw man, let me simply point out that while facts are indeed free agents, there are real costs to creating them – say, through original reporting and scientific research. That is why copyright and patent systems exist: not to prevent the wide sharing and use of intellectual property, but to encourage its creation by ensuring it can be assigned real value.

Copyright law is far from perfect and is probably abused as often as it is used. But it is not being “co-opted to suggest that machines should be prevented from using data” – it is being used to ensure that bad actors do not bypass the value systems we have built around intellectual property.

The ask here is pretty plain: let the systems we own, operate, and profit from freely use the valuable work of others without compensation. To be fair, that part really is “in the same way as humans,” because it is humans who design, run, and deploy these systems, and those humans do not want to pay for anything they do not have to, and do not want regulations to change that.

There are many other recommendations in this small policy document, no doubt covered in greater detail in the versions sent directly to lawmakers and regulators through official lobbying channels.

Some of the ideas are genuinely good, if a little self-serving: “fund digital literacy programs that help people understand how to use artificial intelligence tools to create and access information.” Good! Of course, the authors are heavily invested in those tools. Support “open data commons – collections of accessible data managed in the public interest.” Great! “Examine procurement practices to enable more startups to sell technology to the government.” Excellent!

But these more general, positive recommendations are the kind of thing the industry trots out every year: invest in public resources and speed up government processes. These palatable but inconsequential suggestions are merely vehicles for the more consequential ones described above.

Ben Horowitz, Brad Smith, Marc Andreessen and Satya Nadella want the federal government to step back from regulating this lucrative new industry, let industry decide which regulations are worth the compromise, and hollow out copyright law in a way that more or less acts as a blanket pardon for the illegal or unethical practices that many believe enabled the rapid rise of artificial intelligence. Those are the principles that matter to them, whether or not the kids get their digital literacy.

This article was originally published on : techcrunch.com


Nigerian technology company Moniepoint secures $110 million investment

Moniepoint, a Nigerian fintech company that serves millions of entrepreneurs across Africa, has secured $110 million in financing to expand its business. The funding came from four entities: London-based Development Partners International and Lightrock, which had previously invested in the company, and new investors the Google African Investment Fund and Verod Capital.

Moniepoint – formerly called TeamApt – was named the fastest-growing fintech company by the Financial Times two years in a row.

The company provides a variety of financial services, including bank accounts, loans, and business management tools. It processes over 800 million transactions monthly, worth over $17 billion.

Moniepoint co-founder Tosin Eniolorunda says the new funding will help the company improve its customer experience.

“Our mission is to help our customers solve their challenges by making our platform more innovative, transparent and secure. Proceeds from this fundraising will accelerate our efforts toward financial inclusion and supporting Africa’s entrepreneurial potential. I want to sincerely thank the entire Moniepoint team for making this achievement possible,” Eniolorunda said in a statement shared with Afrotech.

Adefolarin Ogunsanya, partner at Development Partners International, also expressed his excitement at contributing to Moniepoint’s growth.

“We are delighted to lead this investment round in Moniepoint, one of the most exciting and fastest-growing companies in Africa. As a profitable company led by an excellent management team with a clear strategic vision, Moniepoint is well positioned to continue its impressive growth trajectory while driving financial inclusion for underserved businesses and individuals across Africa,” Afrotech reports.

Moniepoint officials say they will continue to prioritize Nigeria, but the company plans to expand into other African countries. It is currently assessing those markets to determine how effectively it can meet customer needs there.

The fintech company has previously received backing from QED Investors, British International Investment (BII) and Endeavor Catalyst. Since its founding in 2015, it has raised over $180 million.


This article was originally published on : www.blackenterprise.com


Indonesia blocks Google Pixel sales after iPhone 16 ban

Indonesia has banned the sale of Google Pixel smartphones for failing to meet national content requirements, days after blocking Apple’s iPhone 16 in Southeast Asia’s largest phone market.

Indonesia’s Ministry of Industry said Google’s phones cannot be sold until they comply with regulations requiring smartphones sold in Indonesia to contain 40% local content.

Before resuming sales, Google must obtain certification for local content, Industry Ministry spokesman Febri Hendri Antoni Arief told local reporters. “The Local Content Policy and related policies aim to be fair to all investors investing in Indonesia and to add value and deepen the industry structure in the country,” Hendri was quoted as saying.

The ban follows Indonesia’s move last week to block sales of the iPhone 16 after Apple fell short of a $95 million investment commitment. Major smartphone makers must produce devices, develop firmware, or invest in local innovation to satisfy Indonesia’s content rules.

Indonesian regulations require tech firms to source 40% of cell phone and tablet components from within the country, a requirement that can be met through local manufacturing, firmware development or direct investment in innovation projects.

Companies can meet the requirements in various ways: Samsung and Xiaomi, for example, opened production plants, while Apple opted to open developer academies.

The regulation, enforced through a certification system called “local content level,” is part of Indonesia’s broader industrial policy of leveraging its large consumer market for national economic development. Companies that fail to meet the threshold face sales restrictions.

Neither Google nor Apple is among the top five smartphone brands in Indonesia, according to research firm Counterpoint.

This article was originally published on : techcrunch.com


Hiring managers reject AI-generated job offers from job seekers

Statistics show that many job seekers exaggerate or falsify details on their CVs, and a growing number of hiring managers take a dim view of applicants using artificial intelligence in job applications.

A new survey from the CV Genius research team found that 80% of hiring managers dislike AI-generated cover letters and resumes, and 74% say they can tell when AI has been used in a job application. Hiring managers prefer human-written applications and perceive candidates who use AI as repetitive, generic, and lazy.

The survey of 625 hiring managers found that over half (57%) are much less likely to hire a candidate who used AI in their application, and may disqualify a candidate altogether if they suspect AI was used.

“Job seekers must learn to use AI as an asset, not a shortcut,” says Ethan David Lee, career expert at CV Genius. “Hiring managers don’t mind AI in applications, but when it’s used carelessly, the result feels impersonal and unremarkable.”

“In the world of artificial intelligence, it is more important than ever for candidates to show their human side,” Lee added. “This doesn’t mean job seekers shouldn’t use AI, but they need to use it carefully if they want it to improve their chances.”

In response to the growing number of job seekers using artificial intelligence in their job search, CV Genius released a guide offering tips on how to use AI to improve applications without raising red flags for hiring managers. Emphasizing that AI can be helpful when used thoughtfully, the guide offers six tips to help job seekers use AI effectively in their applications.

Avoid relying solely on AI

Artificial intelligence should support, not replace, your job application efforts. While it is fine to use AI as a writing aid, make sure each application is tailored to the specific role and company.

Check for exaggerations and inaccuracies

AI’s tendency to exaggerate or fabricate achievements and experience can hurt you in a job interview. Always fact-check your AI-generated CV and cover letter for accuracy. If you land an interview, be ready to back up every claim made in your application.

Include personal experiences and specific examples

AI often falls back on generic phrasing, which can leave CVs and cover letters looking polished but lacking specific evidence. Recruiters recommend avoiding this mistake by adding personal details that AI cannot generate.

Avoid common AI writing patterns

AI-generated content often exhibits consistent patterns, including simple, formal writing styles and repeated phrases. When using AI to draft a CV and cover letter, review and edit the generated content, replacing any words or phrases that appear repeatedly or seem out of context.

Make sure your wording and vocabulary are consistent across your CV, cover letter and interview

Another telltale sign of AI-generated content is a mismatch in writing tone between your CV and cover letter, which can make it hard to live up to the personality of your AI-generated application in a real-world job interview.

Use AI checkers to review your CV and cover letter

To keep your applications from being rejected, run them through multiple AI-detection tools before submitting, and revise any flagged sections so they match your own voice and style.


This article was originally published on : www.blackenterprise.com