Technology

Elon Musk’s X Could Still Be Sanctioned for Training Grok on Europeans’ Data

Earlier this week, the EU's lead privacy regulator wrapped up legal proceedings over how X processed user data to train its Grok AI chatbot, but the saga isn't over yet for the Elon Musk-owned social media platform formerly known as Twitter. Ireland's Data Protection Commission (DPC) confirmed to TechCrunch that it has received a number of complaints filed under the General Data Protection Regulation (GDPR) and will "investigate" them.

“The DPC will now investigate the extent to which any processing that took place complies with the relevant provisions of the GDPR,” the regulator told TechCrunch. “If, as a result of that investigation, it is determined that TUIC (Twitter International Unlimited Company, as X’s principal Irish subsidiary is still called) has breached the GDPR, the DPC will consider whether the exercise of any of the remedial powers is justified, and if so, which one(s).”

X agreed to suspend processing data for Grok training in early August, and that commitment was made permanent earlier this week. The agreement requires X to delete and stop using the European user data it collected between May 7, 2024, and August 1, 2024, for training its AI models, according to a copy obtained by TechCrunch. However, it's now clear that X is under no obligation to delete any AI models trained on that data.

So far, X has not faced any sanctions from the DPC for processing Europeans' personal data to train Grok without their consent, despite urgent legal action by the DPC to block the data collection. Fines under the GDPR can be severe, reaching up to 4% of global annual turnover. (With the company's revenues currently in free fall and likely to struggle to reach $500 million this year, judging by its published quarterly results, that could be particularly painful.)
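As a rough back-of-the-envelope illustration of the stakes (using the article's own figures, not any official numbers), the GDPR's 4%-of-turnover ceiling applied to revenues of roughly $500 million works out to a maximum fine in the region of $20 million:

```python
# Illustrative only: the GDPR caps the largest fines at 4% of global
# annual turnover. Plugging in the article's rough estimate of X's
# revenue (~$500M) gives the theoretical ceiling.

def max_gdpr_fine(annual_turnover_usd: float, rate: float = 0.04) -> float:
    """Upper bound on a GDPR fine under the turnover-based cap."""
    return annual_turnover_usd * rate

estimated_turnover = 500_000_000  # the article's estimate for X this year
print(f"Theoretical maximum fine: ${max_gdpr_fine(estimated_turnover):,.0f}")
# prints "Theoretical maximum fine: $20,000,000"
```

The real figure in any enforcement action would of course depend on the DPC's findings and X's actual audited turnover.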

Regulators also have the power to order operational changes by demanding that violations stop. However, investigating complaints and enforcing the rules can take a long time, even years.

This is significant because while X has been forced to stop using Europeans' data to train Grok, it can still run any AI models it has already trained on data from people who did not consent to its use, without any urgent intervention or sanctions to prevent this.

When asked whether the commitment the DPC obtained from X last month required X to delete any AI models trained on Europeans’ data, the DPC confirmed to us that it didn’t: “The commitment does not require TUIC to take that action; it required TUIC to permanently cease processing the datasets covered by the commitment,” the spokesperson said.

Some might say this is a clever way for X (or other model trainers) to get around EU privacy rules: Step 1) quietly use people's data; Step 2) train AI models on it; and, when the cat's out of the bag and regulators eventually come knocking, commit to deleting the *data* while leaving the trained AI models intact. Step 3) Profit from Grok!?

When asked about this risk, the DPC replied that the aim of the urgent legal proceedings was to address "significant concerns" that X's processing of EU and EEA user data to train Grok "gave rise to risks to the fundamental rights and freedoms of data subjects". However, it didn't explain why it doesn't have the same urgent concerns about the risks to the fundamental rights and freedoms of Europeans whose information has already been baked into Grok.

Generative AI tools are notorious for producing false information. Musk's version is also deliberately edgy, or "anti-woke," as he bills it. That could raise the stakes in the kinds of content it might produce about users whose data was ingested to train the bot.

One reason the Irish regulator may be treading carefully here is that these AI tools are still relatively new. There is also uncertainty among European privacy watchdogs about how to enforce the GDPR against such a new technology, and it's unclear whether the regulation's powers would include the ability to order the deletion of an AI model trained on unlawfully processed data.

However, as complaints in this area continue to multiply, data protection authorities will ultimately have to confront the issue of artificial intelligence.

Pickles departs

In separate news, word emerged overnight Friday that X's head of global affairs has left. Reuters reported the departure of long-time employee Nick Pickles, a British national who spent a decade at Twitter and rose through the ranks during Musk's tenure.

In a post on X, Pickles said he made the decision to leave "a few months ago" but didn't provide details on his reasons for leaving.

But it's clear the company has a lot on its plate, including dealing with a ban in Brazil and political backlash in the U.K. over its role in spreading disinformation related to last month's unrest there, with Musk having a personal penchant for adding fuel to the fire (including posting on X a suggestion that "civil war is inevitable" for the U.K.).

In the EU, X is also under investigation under the bloc's content moderation framework, with the first batch of complaints under the Digital Services Act filed in July. Musk was also recently singled out for a personal warning in an open letter written by the bloc's Internal Markets Commissioner, Thierry Breton, to which the chaos-loving billionaire chose to respond with an offensive meme.

This article was originally published on techcrunch.com

Technology

Introducing the Next Wave of Startup Battlefield Judges at TechCrunch Disrupt 2024


Startup Battlefield 200 is the highlight of every Disrupt, and we can't wait to find out which of the thousands of startups that applied will have the chance to pitch to top venture capitalists at TechCrunch Disrupt 2024. Join us at Moscone West in San Francisco October 28–30 for an epic showdown where founders will have the chance to make a major impact.

Get insight into what the judges are looking for in a winning company as they provide detailed feedback on the evaluation criteria. Don't miss the opportunity to learn from their expert insights and discover the key characteristics that lead to startup success, only at Disrupt 2024.

We're excited to introduce our next group of investors, who will evaluate startups and dive into each pitch in an in-depth and insightful Q&A session. Stay tuned for more big names coming soon!

Alice Brooks, Partner, Khosla Ventures

Alice is a partner at Khosla Ventures, where she focuses on sustainability, food, agriculture, and manufacturing/supply chain. She has worked with multiple startups in robotics, IoT, retail, consumer goods, and STEM education, and has led mechanical, electrical, and application development teams in the US and Asia. She also founded and managed manufacturing operations in factories in China and Taiwan. Prior to KV, Alice was the founder and CEO of Roominate, a STEM education company that helps girls learn engineering concepts through play.

Mark Crane, Partner, General Catalyst

Mark Crane is a partner at General Catalyst, a venture capital firm that works with founders from seed stage onward to help them build companies that can stand the test of time. He focuses on sourcing and investing in later-stage opportunities such as AuthZed, Bugcrowd, Resilience, and TravelPerk. Prior to joining General Catalyst, Mark was a vice president at Cove Hill Partners in Massachusetts. Before that, he was a senior associate at JMI Equity and an associate at North Bridge Growth Equity.

Sofia Dolfe, Partner, Index Ventures

Sofia partners with founders who use their unique perspective and personal understanding of a problem to build companies that drive behavioral change, create powerful network effects, and transform entire industries, from grocery and e-commerce to financial services and healthcare. Sofia is also one of Index Ventures' gaming leads, working with some of the best gaming companies in Europe creating a new generation of iconic gaming titles. She spends most of her time in the Nordics but works with entrepreneurs across the continent.

Christine Esserman, Partner, Accel

Christine Esserman joined Accel in 2017 and focuses on software, internet, and mobile technology companies. Since joining Accel, Christine has helped lead Accel's investments in Blackpoint Cyber, Linear, Merge, ThreeFlow, Bumble, Remote, Dovetail, Ethos, Guru, and Headway. Prior to joining Accel, Christine worked in product and operations roles at multiple startups. A Bay Area native, Christine graduated from the Wharton School at the University of Pennsylvania with a degree in Finance and Operations.

Haomiao Huang, Founding Partner, Matter Venture Partners

Haomiao Huang of Matter Venture Partners is a robotics researcher turned founder turned investor. He is especially passionate about companies that bring digital innovation to physical-economy businesses, with a focus on sectors such as logistics, manufacturing, and transportation, and on advanced technologies such as robotics and AI. Haomiao spent four years investing in hard tech with Wen Hsieh at Kleiner Perkins. He previously founded smart home security startup Kuna, built autonomous cars at Caltech and, as part of his PhD research at Stanford, pioneered the aerodynamics and control of multi-rotor unmanned aerial vehicles. Kuna was part of the Y Combinator Winter 2014 cohort.

Don’t miss it!

The Startup Battlefield winner, who will walk away with a $100,000 cash prize, will be announced at Disrupt 2024, the epicenter of startups. Join 10,000 attendees to witness this breakthrough moment and see the next wave of tech innovation.

Register here and secure your spot to witness this epic battle of startups.

This article was originally published on techcrunch.com

Technology

India Considers Easing Market Share Caps for UPI Payments Operators

PhonePe UPI being used to accept payments at a roadside sunglasses stall.

The regulator that oversees India's popular UPI payments rail is considering relaxing a proposed market share cap for operators like Google Pay, PhonePe, and Paytm as it grapples with how to enforce the restrictions, two people familiar with the matter told TechCrunch.

The National Payments Corporation of India (NPCI), which is regulated by the Indian central bank, is considering raising the market share that UPI operators can hold to more than 40%, said two of the people, requesting anonymity because the information is confidential. The regulator had earlier proposed a 30% market share limit to encourage competition in the space.

UPI has become the most popular way to send and receive money in India, with the mechanism processing over 12 billion transactions a month. Walmart-backed PhonePe has about 48% market share by volume and 50% by value, while Google Pay has a 37.3% share by volume.

Once an industry heavyweight, Paytm has seen its market share fall to 7.2% from 11% late last year amid regulatory challenges.
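To make the cap debate concrete, here is a small illustrative check (share-by-volume figures taken from this article; the function and variable names are ours) of which operators would breach the originally proposed 30% cap versus a relaxed threshold above 40%:

```python
# Share-by-volume figures as reported in the article (percent).
shares_by_volume = {"PhonePe": 48.0, "Google Pay": 37.3, "Paytm": 7.2}

def over_cap(shares: dict[str, float], cap_pct: float) -> list[str]:
    """Return the operators whose market share exceeds the given cap."""
    return [name for name, share in shares.items() if share > cap_pct]

print(over_cap(shares_by_volume, 30.0))  # prints "['PhonePe', 'Google Pay']"
print(over_cap(shares_by_volume, 40.0))  # prints "['PhonePe']"
```

Under the original 30% proposal both leaders would be over the line; raising the cap above 40% would leave only PhonePe needing to shrink, which helps explain why the move is contentious.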

According to several industry executives, raising the market share limits is likely to be a controversial move for the NPCI, as many UPI providers were counting on regulatory action to curb the dominance of PhonePe and Google Pay.

NPCI, which has previously declined to comment on the market share caps, didn't respond to a request for comment on Thursday.

The regulator originally planned to implement the market share caps in January 2021 but extended the deadline to January 1, 2025. It has struggled to find a workable way to enforce the proposed caps.

The stakes are high, especially for PhonePe, India's most valuable fintech startup, valued at $12 billion.

Sameer Nigam, co-founder and CEO of PhonePe, said last month that the startup cannot go public “if there is uncertainty on regulatory issues.”

“If you buy a share at Rs 100 and value it assuming we have 48-49% market share, there is uncertainty whether it will come down to 30% and when,” Nigam told a fintech conference last month. “We are reaching out to them (the regulator) whether they can find another way to at least address any concerns they have or tell us what the list of concerns is,” he added.

This article was originally published on techcrunch.com

Technology

Bluesky addresses trust and safety concerns around abuse, spam, and more


Bluesky butterfly logo and Jay Graber

Social media startup Bluesky, which is building a decentralized alternative to X (formerly Twitter), provided an update Wednesday on how it's approaching various trust and safety issues on its platform. The company is in various stages of developing and piloting a range of initiatives focused on dealing with bad actors, harassment, spam, fake accounts, video safety, and more.

To address malicious users or those who harass others, Bluesky says it's developing new tooling that will be able to detect when multiple new accounts are created and managed by the same person. This could help curb harassment when a bad actor creates several different personas to attack their victims.
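Bluesky hasn't described how this detection works. One plausible way such tooling can operate (purely a hypothetical sketch; all names and signals here are invented for illustration) is to cluster new accounts on a shared signup signal, such as a device or network fingerprint:

```python
# Hypothetical sketch: group account handles by a shared signup
# fingerprint, and surface only the clusters that contain more than one
# account, since those suggest a single operator behind several personas.
from collections import defaultdict

def cluster_by_fingerprint(accounts: dict[str, str]) -> dict[str, list[str]]:
    """Map fingerprint -> handles, keeping only multi-account clusters."""
    clusters: dict[str, list[str]] = defaultdict(list)
    for handle, fingerprint in accounts.items():
        clusters[fingerprint].append(handle)
    return {fp: handles for fp, handles in clusters.items() if len(handles) > 1}

signups = {
    "troll1.bsky": "fp-abc",
    "troll2.bsky": "fp-abc",
    "alice.bsky": "fp-xyz",
}
print(cluster_by_fingerprint(signups))
# prints "{'fp-abc': ['troll1.bsky', 'troll2.bsky']}"
```

Real systems combine many weak signals rather than a single fingerprint, but the clustering idea is the same.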

Another new experiment will help detect "rude" replies and surface them to server moderators. Like Mastodon, Bluesky will support a network where self-hosters and other developers can run their own servers that connect to Bluesky's server and others on the network. This federation capability is still in early access. But in the long term, server moderators will be able to decide how they want to deal with people who post rude replies. In the meantime, Bluesky will eventually reduce the visibility of those replies in its app. Repeated rude labels on content will also lead to account-level labels and suspensions, it says.

To curb the use of lists to harass others, Bluesky will remove individual users from a list if they block the list's creator. Similar functionality was recently introduced to Starter Packs, a type of shared list that can help new users find people to follow on the platform (check out the TechCrunch Starter Pack).

Bluesky will also take aim at lists with offensive names or descriptions, to limit the potential for harassing people by adding them to a public list with a toxic or offensive name or description. Lists that violate Bluesky's Community Guidelines will be hidden from the app until the list owner makes changes that align with Bluesky's policies. Users who continue to create offensive lists will also face further action, though the company didn't share details, adding that lists remain an area of active discussion and development.

In the coming months, Bluesky also intends to shift to handling moderation reports through its app, using notifications rather than relying on email reports.

To combat spam and other fake accounts, Bluesky is launching a pilot that will attempt to automatically detect when an account is fake, scamming, or spamming users. Paired with moderation, the goal is to be able to take action on accounts within "seconds of receiving a report," the company said.

One of the more interesting developments is how Bluesky will comply with local laws while still allowing for free speech. It will use geotags that allow it to hide certain content from users in a specific area in order to comply with local law.

"This allows Bluesky's moderation service to maintain flexibility in creating spaces for free expression while also ensuring legal compliance so that Bluesky can continue to operate as a service in these geographic regions," the company shared in a blog post. "This feature will be rolled out on a country-by-country basis, and we will endeavor to inform users of the source of legal requests when legally possible."
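As a minimal sketch of how such geography-specific hiding could work (our illustration, not Bluesky's actual implementation; all names here are invented), each piece of content carries the regions where it must be hidden, and the feed is filtered against the viewer's region:

```python
# Hypothetical sketch of geography-scoped content hiding: a post tagged
# with regions is filtered out of the feed for viewers in those regions,
# while remaining visible everywhere else.
from dataclasses import dataclass, field

@dataclass
class Post:
    text: str
    hide_in_regions: set[str] = field(default_factory=set)  # ISO country codes

def visible_posts(posts: list[Post], viewer_region: str) -> list[Post]:
    """Return only the posts that are not geo-restricted for this viewer."""
    return [p for p in posts if viewer_region not in p.hide_in_regions]

feed = [
    Post("hello world"),
    Post("locally restricted content", hide_in_regions={"DE"}),
]
print([p.text for p in visible_posts(feed, "DE")])  # prints "['hello world']"
print(len(visible_posts(feed, "US")))               # prints "2"
```

The design choice worth noting is that the content stays on the network globally; only its visibility is scoped, which matches the blog post's framing of per-country legal compliance.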

To address potential trust and safety issues with video, which was recently added, the team is introducing features like the ability to turn off autoplay, making sure videos are labeled, and providing the ability to report videos. It is still evaluating what else may need to be added, and will prioritize based on user feedback.

When it comes to abuse, the company says its general framework weighs "how often something happens versus how harmful it is." It focuses on addressing high-harm, high-frequency issues, as well as "tracking edge cases that could result in significant harm to a few users." The latter, while only affecting a small number of people, causes enough "ongoing harm" that Bluesky will take action to prevent the abuse, it says.

Users can raise concerns via reports, emails, and mentions of the @safety.bsky.app account.

This article was originally published on techcrunch.com