
Technology

OpenAI accidentally deleted potential evidence in NY Times copyright lawsuit (update)


OpenAI logo with spiraling pastel colors (Image Credits: Bryce Durbin / TechCrunch)

Lawyers for The New York Times and Daily News, which are suing OpenAI for allegedly copying their work to train AI models without permission, say OpenAI engineers accidentally deleted potentially relevant data.

Earlier this fall, OpenAI agreed to provide two virtual machines so that counsel for The Times and Daily News could search for their copyrighted content in its AI training sets. (Virtual machines are software-based computers that exist within another computer’s operating system, and are often used for testing, backing up data, and running applications.) In a letter, lawyers for the publishers say that they and the experts they hired have spent more than 150 hours since November 1 combing through OpenAI’s training data.

However, on November 14, OpenAI engineers deleted all of the publishers’ search data stored on one of the virtual machines, according to the aforementioned letter, which was filed late Wednesday in the U.S. District Court for the Southern District of New York.


OpenAI attempted to recover the data, and was mostly successful. However, because the folder structure and file names were “irretrievably” lost, the recovered data “cannot be used to determine where the news plaintiffs’ copied articles were used to build (OpenAI’s) models,” the letter says.

“The news plaintiffs were forced to recreate their work from scratch, using significant person-hours and computer processing time,” lawyers for The Times and Daily News wrote. “The news plaintiffs learned only yesterday that the recovered data is unusable, and that an entire week’s worth of their experts’ and lawyers’ work must be re-done, which is why this supplemental letter is being filed today.”

The plaintiffs’ counsel makes clear that they have no reason to believe the deletion was intentional. However, they say the incident underlines that OpenAI “is in the best position to search its own datasets” for potentially infringing content using its own tools.

An OpenAI spokesperson declined to comment.


However, late Friday, November 22, OpenAI’s counsel filed a response to the letter that attorneys for The Times and Daily News sent Wednesday. In their response, OpenAI’s lawyers unequivocally denied that OpenAI deleted any evidence, and instead suggested the plaintiffs were to blame for a system misconfiguration that led to the technical issue.

“Plaintiffs requested that one of several machines provided by OpenAI be reconfigured to search training datasets,” OpenAI’s attorney wrote. “Implementation of plaintiffs’ requested change, however, resulted in the deletion of the folder structure and certain file names from one hard drive – a drive that was intended to serve as a temporary cache… In any event, there is no reason to believe that any files were actually lost.”

In this case and others, OpenAI maintains that training models using publicly available data, including articles from The Times and Daily News, is permissible. In other words, in creating models like GPT-4o, which “learn” from billions of examples of e-books, essays, and other materials to generate human-sounding text, OpenAI believes that it isn’t required to license or otherwise pay for those examples, even if it makes money from these models.

That said, OpenAI has signed licensing agreements with a growing number of publishers, including the Associated Press, Business Insider owner Axel Springer, the Financial Times, People’s parent company Dotdash Meredith, and News Corp. OpenAI has declined to make the terms of these deals public, but one of its content partners, Dotdash, is reportedly being paid at least $16 million per year.


OpenAI has neither confirmed nor denied that it trained its AI systems on any specific copyrighted works without permission.

This article was originally published on : techcrunch.com


After Klarna, Zoom’s CEO also uses an AI avatar on quarterly call


Zoom CEO Eric Yuan

CEOs are now so steeped in AI that they’re sending their avatars to handle quarterly earnings calls in their place, at least in part.

After Klarna’s CEO sent his AI avatar to an investor call earlier this week, Zoom CEO Eric Yuan followed suit, also using an avatar for his initial comments. Yuan deployed his custom avatar via Zoom Clips, the company’s asynchronous video tool.

“I am proud to be among the first CEOs to use an avatar in an earnings call,” he said (or rather, his avatar did). “It is just one example of how Zoom is pushing the boundaries of communication and collaboration. At the same time, we know trust and security are essential. We take AI-generated content seriously and are building strong safeguards to prevent misuse, protect user identity, and ensure avatars are used responsibly.”


Yuan has long championed the use of avatars in meetings, and has previously said the company aims to create digital twins of users. He isn’t alone in this vision; the CEO of an AI-powered transcription company is reportedly training his own avatar to share the load.

Meanwhile, Zoom says it is making the custom avatar feature available to all users this week.


This article was originally published on : techcrunch.com


OpenAI’s next big product won’t be a wearable: Report


Sam Altman speaks onstage during The New York Times Dealbook Summit 2024.

OpenAI pushed generative AI into the public consciousness. Now, it may be developing a very different kind of AI device.

According to a WSJ report, OpenAI CEO Sam Altman told employees on Wednesday that the company’s next major product won’t be a wearable. Instead, it will be a compact, screenless device, fully aware of its user’s surroundings. Small enough to sit on a desk or fit in a pocket, Altman described it both as a “third device” alongside a MacBook Pro and an iPhone, and as an “AI companion” integrated into everyday life.

The preview came after OpenAI announced it was acquiring io, a startup founded last year by former Apple designer Jony Ive, in an all-equity deal valued at $6.5 billion. Ive will take on a key creative and design role at OpenAI.


Altman reportedly told employees that the acquisition could ultimately add $1 trillion in value to the company, and that the product won’t be the kind of wearable device or glasses that other outfits have shipped.

Altman also reportedly stressed to staff that secrecy would be crucial to keep competitors from copying the product before launch. As it happens, a recording of his comments leaked to the Journal, raising questions about how much he can trust his own team, and how much more he’ll be willing to reveal.


This article was originally published on : techcrunch.com


Google’s latest Gemma AI model can run on phones


Google’s “open” AI model family, Gemma, is growing.

At Google I/O 2025 on Tuesday, Google debuted Gemma 3n, a model designed to run “smoothly” on phones, laptops, and tablets. Available in preview starting Tuesday, Gemma 3n can handle audio, text, images, and videos, according to Google.

Models efficient enough to run offline, without the need for cloud compute, have gained popularity in the AI community in recent years. Not only are they cheaper to use than large models, but they preserve privacy by eliminating the need to send data to a remote data center.


During a keynote at I/O, Gemma product manager Gus Martins said that Gemma 3n can run on devices with less than 2GB of RAM. “Gemma 3n shares the same architecture as Gemini Nano, and is also engineered for incredible performance,” he added.

In addition to Gemma 3n, Google is releasing MedGemma through its Health AI Developer Foundations program. According to Google, MedGemma is its most capable open model for analyzing health-related text and images.

“MedGemma (is) our (…) collection of open models for multimodal (health) text and image understanding,” Martins said. “MedGemma works great across a range of imaging and text applications, so that developers (…) can adapt the models for their own health apps.”

Also on the horizon is SignGemma, an open model for translating sign language into spoken-language text. Google says SignGemma will enable developers to create new apps and integrations for deaf and hard-of-hearing users.


“SignGemma is a new family of models trained to translate sign language to spoken-language text, but it’s best at American Sign Language and English,” Martins said. “It’s the most capable sign language understanding model ever, and we can’t wait for you, developers and deaf and hard-of-hearing communities, to take this foundation and build with it.”

It’s worth noting that Gemma has been criticized for its custom, non-standard license terms, which some developers say make using the models commercially a risky proposition. That hasn’t deterred developers from downloading Gemma models tens of millions of times, however.



This article was originally published on : techcrunch.com
