Technology

Photo-sharing community EyeEm will license users’ photos to train AI if they don’t delete them

EyeEm, the Berlin-based photo-sharing community that was sold to Spanish company Freepik last year after going bankrupt, is now licensing its users’ photos to train artificial intelligence models. Earlier this month, the company told users via email that it was adding a new clause to its Terms of Service that would grant it the rights to submit user content to “train, develop and improve machine learning software, algorithms and models.” Users had 30 days to opt out by removing all their content from the EyeEm platform. Otherwise, they consented to this use of their work.

At the time of the acquisition in 2023, EyeEm’s photo library contained 160 million images and nearly 150,000 users. The company has said it will merge its community with Freepik’s over time. Despite the decline in its popularity, its apps still see almost 30,000 downloads per month, according to app analytics data.

Once considered a possible competitor to Instagram, or at least a “European Instagram,” EyeEm had shrunk to three employees before selling to Freepik, TechCrunch’s Ingrid Lunden previously reported. Freepik CEO Joaquin Cuenca Abela hinted at the company’s possible plans for EyeEm, saying it would explore how to bring more AI to creators on the platform.

As it turns out, that meant selling their work to train AI models.

EyeEm’s updated Terms now read as follows:

Section 13 details the complicated removal process, which begins with directly deleting photos. This doesn’t affect content previously shared on EyeEm Magazine or social media, the company notes. To remove content from EyeEm Market (where photographers sold their photos) or other content platforms, users must send a request to support@eyeem.com, providing the content ID numbers of the photos they want removed and specifying whether the content should also be removed from their account or only from the EyeEm marketplace.

It is worth noting that the notice says it may take up to 180 days for data to be removed from the EyeEm marketplace and partner platforms. Yes, that’s right: removal requests can take up to 180 days, yet users only had 30 days to opt out. That means their only real option was to manually delete photos one by one.

Worse yet, the company adds:

Section 8 details the licensing rights for AI training. In Section 10, EyeEm informs users that if they delete their account, they waive their right to any compensation for their work, something users may want to weigh if they hope to avoid having their data fed into AI models.

EyeEm’s move is an example of how AI models are being trained on users’ content, sometimes without their explicit consent. Although EyeEm offered an opt-out of sorts, any photographer who missed the announcement lost the right to dictate how their photos would be used in the future. Given how much EyeEm’s standing as a popular Instagram alternative has declined over the years, many photographers may have forgotten they ever used it. If they noticed the email at all, it may well have been sitting in a spam folder somewhere.

Those who did notice the changes were upset that they received only 30 days’ notice and no option to bulk delete their contributions, which made opting out all the more painful.

Requests for comment sent to EyeEm were not immediately answered, but given the 30-day countdown, we decided to publish this story before receiving a response.

This kind of user-hostile behavior is why many users today are considering a switch to open, federated social networks. Pixelfed, a photo-sharing platform that runs on the same ActivityPub protocol as Mastodon, is taking advantage of EyeEm’s situation to attract users.

Advertisement

In a post on its official account, Pixelfed announced: “We will never use your images to train artificial intelligence models. Privacy first, pixels forever.”


This article was originally published on : techcrunch.com

Technology

Following Klarna, Zoom’s CEO also uses an AI avatar on quarterly earnings call

Zoom CEO Eric Yuan

CEOs are now so immersed in artificial intelligence that they’re sending their avatars to handle quarterly earnings calls instead, at least in part.

After Klarna’s CEO sent his AI avatar to an investor call earlier this week, Zoom CEO Eric Yuan followed suit, also using his avatar for initial comments. Yuan deployed his custom avatar via Zoom Clips, the company’s asynchronous video tool.

“I am proud to be one of the first CEOs to use an avatar on an earnings call,” he said, or rather, his avatar did. “This is just one example of how Zoom is pushing the boundaries of communication and collaboration. At the same time, we know trust and security are essential. We take AI-generated content seriously and have built strong safeguards to prevent misuse, protect user identity, and ensure avatars are used responsibly.”

Yuan has long favored using avatars in meetings and has previously said the company aims to create digital twins of its users. He isn’t alone in this vision; the CEO of an AI-powered transcription company is apparently training his own avatar to share the load.

Meanwhile, Zoom says it is making the custom avatar feature available to all users this week.


This article was originally published on : techcrunch.com

Technology

OpenAI’s next big product won’t be a wearable: Report

Sam Altman speaks onstage during The New York Times Dealbook Summit 2024.

OpenAI pushed generative artificial intelligence into the public consciousness. Now, it may be developing a very different kind of AI device.

According to a WSJ report, OpenAI CEO Sam Altman told employees on Wednesday that the company’s next major product won’t be a wearable. Instead, it will be a compact, screenless device, fully aware of the user’s surroundings. Small enough to sit on a desk or fit in a pocket, Altman described it both as a “third device” alongside a MacBook Pro and an iPhone, and as an “AI companion” integrated into everyday life.

The preview came after OpenAI announced it was acquiring io, a startup founded last year by former Apple designer Jony Ive, in an all-equity deal valued at $6.5 billion. Ive will take on a key creative and design role at OpenAI.

Altman reportedly told employees that the acquisition could ultimately add $1 trillion in value to the company, as it creates a category of devices unlike the wearables or glasses that other companies have pitched.

Altman also reportedly stressed to staff that secrecy would be crucial to keep competitors from copying the product before launch. As it turns out, a recording of his comments leaked to the Journal, raising questions about how much he can trust his own team, and how much more he will be willing to reveal.


This article was originally published on : techcrunch.com

Technology

Google’s latest Gemma AI model can run on phones

Google’s family of “open” AI models, Gemma, is growing.

At Google I/O 2025 on Tuesday, Google unveiled Gemma 3n, a model designed to run “smoothly” on phones, laptops, and tablets. Available in preview starting Tuesday, Gemma 3n can handle audio, text, images, and video, according to Google.

Models efficient enough to run offline, without needing cloud compute, have gained popularity in the AI community in recent years. Not only are they cheaper to use than large models, but they also preserve privacy by eliminating the need to send data to a remote data center.

Speaking at I/O, Gemma product manager Gus Martins said Gemma 3n can run on devices with less than 2GB of RAM. “Gemma 3n shares the same architecture as Gemini Nano, and is also engineered for incredible performance,” he added.

In addition to Gemma 3n, Google is releasing MedGemma through its AI developer foundations program. According to Google, MedGemma is its most capable open model for analyzing health-related text and images.

“MedGemma (is) our (…) collection of open models to understand health text and multimodal images,” said Martins. “MedGemma works great across different imaging and text applications, so that developers (…) can adapt the models for their own health applications.”

Also on the horizon is SignGemma, an open model for translating sign language into spoken-language text. Google claims SignGemma will enable developers to create new apps and integrations for deaf and hard-of-hearing users.

“SignGemma is a new family of models trained to translate sign language into spoken-language text, but it’s best at American Sign Language and English,” said Martins. “It’s the most capable sign-language understanding model ever, and we can’t wait for you (developers, and deaf and hard-of-hearing communities) to take this foundation and build with it.”

It is worth noting that Gemma has been criticized for its custom, non-standard license terms, which some developers say have made using the models commercially a risky proposition. That hasn’t discouraged developers from downloading Gemma models tens of millions of times, however.

This article was originally published on : techcrunch.com