Technology
Microsoft and A16Z are putting aside their differences and joining hands in protest against artificial intelligence regulations

The two biggest forces in two deeply intertwined tech ecosystems – large incumbents and startups – have taken a break from counting their money to jointly demand that the federal government stop even considering regulations that might affect their financial interests or, as they prefer to call it, innovation.
“Our two companies may not agree on everything, but it’s not about our differences,” writes this group of people with very different perspectives and interests: A16Z founders and partners Marc Andreessen and Ben Horowitz, along with Microsoft CEO Satya Nadella and president and chief legal officer Brad Smith. A true cross-section of the industry, representing both big business and big money.
But they claim to be looking out for the little guys: all the businesses that would supposedly be affected by the latest attempt at regulatory overreach, SB 1047.
Imagine being fined for improperly releasing an open model! A16Z general partner Anjney Midha called it a “regressive tax” on startups and “blatant regulatory capture” by Big Tech firms that, unlike Midha and his impoverished colleagues, could afford the lawyers needed to comply.
Except that was all disinformation spread by Andreessen Horowitz and other wealthy interests that actually stood to be affected as backers of billion-dollar enterprises. In fact, small models and startups would have been only trivially affected, since the proposed law specifically exempted them.
It’s odd that the very kind of targeted carve-out for “Little Tech” that Horowitz and Andreessen routinely champion was distorted and minimized by the lobbying campaign they and others waged against SB 1047. (The bill’s sponsor, California State Senator Scott Wiener, discussed the whole affair in a recent interview at Disrupt.)
The bill had its problems, but its opponents greatly exaggerated the compliance costs and never meaningfully substantiated claims that it would chill or burden startups.
It’s part of a long-running pattern in which Big Tech – to which, despite their posturing, Andreessen and Horowitz are closely tied – operates at the state level, where it can win (as with SB 1047), while calling for federal solutions it knows will never come, or that will have no teeth thanks to partisan bickering and congressional ineptitude on technical issues.
This joint statement of “political opportunity” is the second part of the play: after torpedoing SB 1047, they can say they did it only to support federal policy. Never mind that we’re still waiting for the federal privacy law tech companies have been pushing for a decade while fighting state laws.
What policies do they support? “A different responsible market approach” – in other words: hands off our money, Uncle Sam.
Regulation should be based on a “science-based and standards-based approach, recognizing regulatory frameworks that focus on the use and misuse of technology” and should “focus on the risk of bad actors exploiting artificial intelligence.” This means no proactive regulation, just reactive penalties when criminals use unregulated products for criminal purposes. That approach worked out great in the whole FTX situation, so I can see why they support it.
“The regulation should only be implemented if the benefits outweigh the costs.” It would take thousands of words to explain all the ways this idea, expressed in this context, is funny. But basically, they are suggesting that the fox be included on the henhouse planning committee.
Regulators should “allow developers and startups the flexibility to choose AI models to use wherever they build solutions, and not tilt the playing field in favor of any one platform.” The implication is that there is some plan afoot to require permission to use one model or another. Since that is not the case, this is a straw man.
Here is a longer passage I have to quote in full:
The right to learn: Copyright aims to promote the progress of science and the useful arts by extending protection to publishers and authors to encourage them to make new works and knowledge available to the public, but not at the expense of society’s right to learn from those works. Copyright law should not be co-opted to imply that machines should be prevented from using data – the foundation of artificial intelligence – to learn in the same way as humans. Unprotected knowledge and facts, whether or not contained in protected subject matter, should remain free and accessible.
To be clear, the assertion here is that software operated by billion-dollar corporations has a “right” to access any data because it should be able to learn from it “in the same way as humans.”
First of all, no. These systems are not like people; they produce data that mimics the human-generated data in their training sets. They are complex statistical projection programs with a natural language interface. They have no more “right” to any document or fact than Excel does.
Second, the idea that “facts” – by which they mean “intellectual property” – are the only thing these systems are interested in, and that some kind of fact-hoarding cabal is working to stop them, is a manufactured narrative we have seen before. Perplexity made the “facts belong to everyone” argument in its public response to a lawsuit alleging systematic content theft, and its CEO Aravind Srinivas repeated the fallacy to me on stage at Disrupt, as if they were being sued for knowing trivia like the distance from the Earth to the Moon.
While this is not the place to fully dissect this particular straw man, let me simply point out that while facts may indeed be free agents, there are real costs to creating them – say, through original reporting and scientific research. That is why the copyright and patent systems exist: not to prevent the wide sharing and use of intellectual property, but to encourage its creation by ensuring it can be assigned real value.
Copyright law is far from perfect and is probably abused as often as it is used. But it is not being “co-opted to imply that machines should be prevented from using data” – it is being applied to ensure that bad actors don’t circumvent the systems of value we have built around intellectual property.
That is quite clearly the ask here: let the systems we own, operate, and profit from freely use the valuable work of others without compensation. To be fair, that part is “in the same way as humans,” because it is people who design, direct, and deploy these systems, and those people don’t want to pay for anything they don’t have to, and don’t want regulations to change that.
There are plenty of other recommendations in this little policy document, no doubt covered in greater detail in the versions sent directly to lawmakers and regulators through official lobbying channels.
Some of the ideas are undoubtedly good, if a little self-serving: “fund digital literacy programs that help people understand how to use artificial intelligence tools to create and access information.” Good! Of course, the authors are heavily invested in those tools. Support “Open Data Commons – collections of accessible data managed in the public interest.” Great! “Examine procurement practices to enable more startups to sell technology to the government.” Excellent!
But these more general, positive recommendations are the kind of thing the industry trots out every year: invest in public resources and speed up government processes. These palatable but inconsequential suggestions are just vehicles for the more important ones described above.
Ben Horowitz, Brad Smith, Marc Andreessen and Satya Nadella want the federal government to step back from regulating this lucrative new industry, let industry decide which regulations are worth the compromise, and nullify copyright in a way that more or less acts as a blanket pardon for the illegal or unethical practices many believe enabled the rapid development of artificial intelligence. Those are the principles that matter to them, whether or not children ever acquire digital skills.
Technology
Trump to sign bill criminalizing revenge porn and explicit deepfakes

President Donald Trump is expected to sign the Take It Down Act, a bipartisan law that imposes stricter penalties for distributing explicit images, including deepfakes and revenge porn.
The act criminalizes the publication of such images, whether they are authentic or AI-generated. Anyone who publishes the photos or videos can face penalties, including fines, imprisonment and restitution.
Under the new law, social media companies and web platforms must remove such material within 48 hours of notice from the victim. Platforms must also take steps to delete duplicate content.
Many states have already banned sexually explicit deepfakes and revenge porn, but this is the first time federal regulators will step in to impose restrictions on internet companies.
First Lady Melania Trump lobbied for the law, which was sponsored by Senators Ted Cruz (R-Texas) and Amy Klobuchar (D-Minn.). Cruz said he was inspired to act after hearing that Snapchat refused for nearly a year to remove a deepfake of a 14-year-old girl.
Free speech advocates and digital rights groups have raised concerns, saying the law is too broad and could lead to censorship of legal images, such as legal pornography, as well as of government critics.
Technology
Microsoft’s Satya Nadella chooses chatbots over podcasts

Microsoft CEO Satya Nadella says he likes podcasts, but apparently he doesn’t actually listen to them anymore.
That tidbit comes toward the end of a longer Bloomberg profile of Nadella focused on Microsoft’s artificial intelligence strategy and its complicated relationship with OpenAI. To illustrate how much he uses the Copilot AI assistant in his daily life, Nadella said that instead of listening to podcasts, he now sends the transcript to Copilot and then talks with Copilot about the content while driving to the office.
Nadella – who jokingly described his job as being an “email typist” – also said he relies on at least 10 custom agents built in Copilot Studio to summarize emails and messages, prepare him for meetings, and handle other office tasks.
It seems AI may already be transforming Microsoft in more significant ways, with programmers reportedly hit hardest in the company’s latest layoffs, which came shortly after Nadella said that as much as 30% of the company’s code is written by AI.
Technology
OpenAI’s planned data center in Abu Dhabi would be bigger than Monaco

OpenAI is set to help develop a staggering 5-gigawatt data center campus in Abu Dhabi, positioning the company as a principal anchor tenant in what could become one of the largest AI infrastructure projects in the world, according to a new Bloomberg report.
The facility would reportedly span an enormous 10 square miles and consume power equivalent to five nuclear reactors, dwarfing any existing AI infrastructure announced by OpenAI or its competitors. (OpenAI has not yet responded to TechCrunch’s request for comment, but for perspective, that footprint is larger than Monaco.)
The UAE project, developed in cooperation with G42, a conglomerate headquartered in Abu Dhabi, is part of OpenAI’s ambitious Stargate project, a joint venture announced in January that envisions massive data centers around the world powering the development of AI.
While the first Stargate campus in the United States – already under way in Abilene, Texas – is expected to reach 1.2 gigawatts, this Middle East counterpart will be more than four times that size.
The project also lands amid wider AI ties between the U.S. and the UAE that go back several years – and that have irked some lawmakers.
OpenAI’s ties to the UAE date back to a 2023 partnership with G42 aimed at driving AI adoption in the Middle East. Speaking at an earlier event in Abu Dhabi, OpenAI CEO Sam Altman himself praised the UAE, saying it had been “talking about AI since before it was cool.”
As with much of the AI world, these relationships are … complicated. Founded in 2018, G42 is chaired by Sheikh Tahnoun bin Zayed Al Nahyan, the UAE’s national security adviser and the younger brother of the country’s ruler. OpenAI’s embrace of G42 raised concerns in late 2023 among American officials, who feared G42 could give the Chinese government access to advanced American technology.
Those fears centered on G42’s “active relationships” with blacklisted entities, including Huawei and the Beijing Genomics Institute, as well as with individuals linked to Chinese intelligence efforts.
Under pressure from American lawmakers, G42’s CEO told Bloomberg in early 2024 that the company had changed its strategy: “All of our China investments that were previously made have already been divested. Because of that, of course, we no longer need any physical presence in China.”
Shortly afterward, Microsoft – OpenAI’s principal shareholder, with its own wider interests in the region – announced a $1.5 billion investment in G42, with its president Brad Smith joining G42’s board.