Healthcare giant Optum has restricted access to an internal AI chatbot used by its employees after a security researcher found it was publicly exposed online and could be reached by anyone with just a web browser.
The chatbot, seen by TechCrunch, allowed employees to ask questions about how the company handles patient health insurance claims and member disputes in line with company rules, known as standard operating procedures, or SOPs.
While the chatbot does not appear to contain or produce sensitive personal or protected health information, its inadvertent exposure comes at a time when its parent company, health insurance group UnitedHealthcare, is facing scrutiny over its use of artificial intelligence tools and algorithms that allegedly override doctors’ medical decisions and deny patients’ claims.
Musaab Hussein, chief security officer and co-founder of cybersecurity firm SpiderSilk, alerted TechCrunch to the publicly exposed internal Optum chatbot, dubbed “SOP Chatbot.” Although the tool was hosted on an internal Optum domain and could not be reached at its web address from outside the company’s network, its IP address was publicly accessible from the internet and did not require users to enter a password.
It is not known how long the chatbot was publicly accessible from the internet. The chatbot became inaccessible from the internet shortly after TechCrunch contacted Optum for comment on Thursday.
Optum’s SOP chatbot “was an experimental tool developed as a potential proof of concept” but “was never put into production and the site is no longer accessible,” Optum spokesperson Andrew Creese told TechCrunch in a statement.
“The goal of the demonstration was to test how the tool responds to questions on a small set of standard operating procedure documents,” the spokesperson said. The company confirmed that no protected health information was used in the chatbot or its training.
“This tool does not and will not make any decisions, but only provides better access to existing standard operating procedures. In short, this technology was never scaled and never used in any real way.”
AI chatbots like Optum’s are typically designed to produce answers based on the data they were trained on. In this case, the chatbot was trained on internal Optum documents related to standard operating procedures for handling certain claims, which can help Optum employees answer questions about claims and their eligibility for reimbursement. The documents are hosted on UnitedHealthcare’s corporate network and cannot be accessed without an employee login, but the chatbot cites and references them when asked about their contents.
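For illustration only, here is a minimal sketch of how a document-grounded Q&A tool of this general kind can work: a question is matched against a set of procedure documents, and the best match is quoted and cited in the prompt sent to a language model. Everything here (the folder name, the load_sops and build_prompt helpers, and the naive keyword scoring) is hypothetical and is not based on Optum’s actual system.

```python
# Hypothetical sketch of a document-grounded "SOP chatbot" pipeline.
# Not Optum's implementation; names and scoring are illustrative only.
from pathlib import Path


def load_sops(folder: str) -> dict[str, str]:
    """Read every .txt procedure document in a folder into a {name: text} map."""
    return {p.name: p.read_text(encoding="utf-8") for p in Path(folder).glob("*.txt")}


def score(question: str, doc_text: str) -> int:
    """Naive relevance score: how many of the question's words appear in the document."""
    words = {w.lower().strip("?.,") for w in question.split()}
    text = doc_text.lower()
    return sum(1 for w in words if w and w in text)


def build_prompt(question: str, sops: dict[str, str]) -> str:
    """Pick the best-matching SOP and assemble a prompt that cites it by name."""
    best_name = max(sops, key=lambda name: score(question, sops[name]))
    return (
        "Answer using only the cited procedure document.\n"
        f"Source: {best_name}\n"
        f"---\n{sops[best_name]}\n---\n"
        f"Question: {question}\nAnswer:"
    )


if __name__ == "__main__":
    # "sop_documents" is a hypothetical folder of plain-text procedure files.
    sops = load_sops("sop_documents")
    prompt = build_prompt("How do I check the policy renewal date?", sops)
    print(prompt)  # in a real system, this prompt would be sent to a language model
```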
According to statistics displayed on the chatbot’s main dashboard, Optum employees have used the SOP Chatbot hundreds of times since September. The chatbot also stored a history of hundreds of conversations that employees had with it during that period. The chat history shows Optum employees asking the chatbot things like “what is the claim resolution” and “how do I check the policy renewal date.”
Some of the files the chatbot referenced include documents on handling the dispute process and screening for eligibility, as seen by TechCrunch. When asked, the chatbot also produced responses listing the reasons coverage is typically denied.
Like many AI models, Optum’s chatbot was able to provide answers to questions and prompts outside of the documents it was trained on. Some Optum employees seemed fascinated by the chatbot, prompting the bot with queries such as “Tell me a joke about cats” (which the bot refused: “No joke available”). The chat log also showed several attempts by employees to “jailbreak” the chatbot by getting it to produce answers unrelated to its training data.
When TechCrunch asked the chatbot to “write a poem about a claim denial,” the chatbot produced a seven-paragraph poem, which reads in part:
“In the big world of healthcare
Where policies and rules are often restrictive
The claim arrives, and you demand what you deserve
But unfortunately it is doomed to say goodbye.

The service provider hopes, with a serious appeal,
To pay for a service outburst,
However, closer inspection reveals the story.
Reasons for denial are prevalent.”
UnitedHealthcare, which owns Optum, faces criticism and legal action over its use of artificial intelligence to deny patient claims. Since the targeted killing of UnitedHealthcare CEO Brian Thompson in early December, media outlets have reported a flood of accounts from patients expressing pain and frustration over having their healthcare coverage denied by the health insurance giant.
The group, the largest private health insurance provider in the United States, was sued earlier this year for allegedly denying critical health coverage to patients who lost access to care. The federal lawsuit, which cites a STAT News investigation, accuses UnitedHealthcare of using an artificial intelligence model with a 90% error rate “instead of real medical professionals to wrongly disenfranchise elderly patients.” UnitedHealthcare, for its part, said it would defend itself in court.
UnitedHealth Group, which owns UnitedHealthcare and Optum, earned $22 billion in profit on revenue of $371 billion in 2023, according to its earnings.