Software engineers, developers and academic researchers have serious concerns about transcriptions produced by OpenAI’s Whisper, according to an Associated Press report.
Although there’s no shortage of discussion about generative AI’s tendency to hallucinate — basically, make things up — it’s a bit surprising that this is an issue with transcription, where you’d expect text to closely follow the audio being transcribed.
Instead, researchers told the AP, Whisper inserted everything from racist comments to imagined medical treatments into the texts. This could be particularly disastrous given Whisper’s adoption in hospitals and other medical contexts.
A University of Michigan researcher studying public meetings found hallucinations in eight out of 10 audio transcripts. One machine learning engineer studied more than 100 hours of Whisper transcriptions and found hallucinations in more than half of them. One developer reported finding hallucinations in nearly all of the approximately 26,000 transcripts he created using Whisper.
An OpenAI spokesperson said the company is “continuously working to improve the accuracy of our models, including reducing hallucinations” and noted that its usage policies prohibit the use of Whisper “in certain high-stakes decision-making contexts.”
“We thank the researchers for sharing their results,” they said.