On Saturday, an Associated Press investigation revealed that OpenAI's Whisper transcription tool creates fabricated text in medical and business settings despite warnings against such use. The AP interviewed more than 12 software engineers, developers, and researchers who found the model regularly invents text that speakers never said, a phenomenon often called a "confabulation" or "hallucination" in the AI field.
Upon its release in 2022, OpenAI claimed that Whisper approached "human level robustness" in audio transcription accuracy. However, a University of Michigan researcher told the AP that Whisper created false text in 80 percent of public meeting transcripts examined. Another developer, unnamed in the AP report, claimed to have found invented content in almost all of his 26,000 test transcriptions.
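If you're going to run Whisper over recordings anyway, the open-source `openai-whisper` Python package at least exposes per-segment confidence signals that can flag spans for human review. Here's a minimal sketch; the thresholds mirror the package's own decoding-fallback defaults, but the flagging heuristic and the file name are my illustrative assumptions, not anything OpenAI recommends:

```python
# Sketch: flag possibly confabulated segments in a Whisper transcript.
# Requires the open-source package: pip install openai-whisper
# Thresholds mirror whisper's own decoding-fallback defaults; using them
# as a post-hoc "review me" filter is an assumption, not OpenAI guidance.

import whisper

model = whisper.load_model("base")
result = model.transcribe("visit_recording.mp3")  # hypothetical audio file

for seg in result["segments"]:
    suspicious = (
        seg["avg_logprob"] < -1.0          # low decoder confidence
        or seg["no_speech_prob"] > 0.6     # model suspects silence, yet emitted text
        or seg["compression_ratio"] > 2.4  # highly repetitive text, a hallucination tell
    )
    flag = "  <-- REVIEW" if suspicious else ""
    print(f'[{seg["start"]:7.2f}-{seg["end"]:7.2f}] {seg["text"].strip()}{flag}')
```

Flagging is not fixing, of course: a fabricated sentence can sail past all three checks, which is rather the point of the AP's findings.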
Of course, they use it because it's cheaper than paying a human transcriber. So riddle me this, Healthcare Administrator: what do you call yet another AI that lies all the time? A day that ends in "-day".
And people have started noticing:
While the vast majority of people over 50 look for health information on the internet, a new poll shows 74% would have very little or no trust in such information if it were generated by artificial intelligence.
Meanwhile, 20% of older adults have little or no confidence that they could spot misinformation about a health topic if they came across it.
That percentage was even higher among older adults who say their mental health, physical health or memory is fair or poor, and among those who report having a disability that limits their activities. In other words, those who might need trustworthy health information the most were more likely to say they had little or no confidence they could spot false information.
People are smart enough to catch a whiff of marketing Bravo Sierra.
From now on I will start asking all of my healthcare providers if they do transcription, and if so whether they use AI for the transcription. If they do I will demand to review the transcript. If they won't, I'll get a different provider.
Gack!
The problem with artificial intelligence is too many people trust it. Instead of an interesting little potential application getting tested over and over again by people who know what they're doing, it has been dropped on the general population with all sorts of hoopla about how wonderful it is, without being tested anywhere nearly enough.
As the old saying goes, "... it only does what I tell it."
The more I see of this sort of AI, the less I want to have anything to do with it.
Not a bad idea at all!
When I was in the business back in the day, we started out with real live tape recorders and transcriptionists. We'd take a look at the charts the next day, with special attention to anything complicated, uncommon, or medicolegally touchy. When I made my 7th-inning career switch it was all medical software. My typing skills from long ago came in really handy. Too many docs just used templates, and those encourage sloppy work. There was speech recognition software too; Dragon NaturallySpeaking, as I remember one such being. It was OK, and the longer you used it and made corrections, the better it got, much like a real transcriptionist. AI doing it and presumably filling in the blanks? Seems like a liability nightmare. The temptation to cook in a bit of "upcoding" to help the loyal customer get more money would be significant, even without the possibility of other nonsense creeping in.
Healthcare has been using error-prone Dragon for transcription for years, because it's cheaper, not because it's accurate. The impetus to cut costs wherever possible, regardless of consequences, already makes it difficult to ensure proper care. And it's not going to improve any time soon.
Current AI is artificial, but it's NOT intelligent ... nor sentient ... yet. When/if that happens, I expect whatever develops to become malevolent quickly.
A few lawsuits will solve part of the problem. Too late for some though.
As a retired RN, I perform chart audits for law firms in possible malpractice and ethics cases, and I am seeing a lot of AI transcription errors, omissions, and gobbledygook words. I stridently point out these chart errors and the fact that an AI transcription was performed. Judges and juries do not like this type of charting and its broader implications. A medical chart is a legal document and has to be factual and accurate, and there is an accepted way to correct an error; AI is not it.