Technology

Researchers say OpenAI’s Whisper transcription tool has problems with hallucinations

According to a report from the Associated Press, software engineers, developers, and academic researchers have serious concerns about transcriptions produced by OpenAI’s Whisper.

While there has been no shortage of debate about generative AI’s tendency to hallucinate – essentially, to make things up – it’s somewhat surprising that this is a problem in transcription, where one would expect the output to closely follow the audio being transcribed.

Instead, researchers told the AP that Whisper has inserted everything from racist commentary to imagined medical treatments into transcripts. That could be especially damaging as Whisper is adopted in hospitals and other medical settings.

A University of Michigan researcher studying public meetings found hallucinations in eight out of every 10 audio transcripts. A machine learning engineer examined more than 100 hours of Whisper transcripts and found hallucinations in more than half of them. And a developer reported finding hallucinations in nearly all of the 26,000 transcripts he created with Whisper.

An OpenAI spokesperson said the company is “continuously working to improve the accuracy of our models, including reducing hallucinations,” and noted that its usage policies prohibit using Whisper “in certain high-stakes decision-making contexts.”

“We thank the researchers for sharing their findings,” they said.

This article was originally published on techcrunch.com.
