April 29, 2024

UnityPoint Health begins AI research

The Greater Regional Health Board of Directors was informed during its February meeting that UnityPoint Health, with which Greater Regional is affiliated, has begun researching how to incorporate AI into health care.

The board was told UnityPoint’s Benjamin Cleveland is a principal analytical data scientist whose role has expanded to include AI Strategist, the first such position for UnityPoint Health.

After the meeting, Cleveland provided the following overview of AI, specifically how it applies to health care and how UnityPoint is approaching it. There was no mention during Greater Regional’s meeting of whether, how or when AI will be used in Creston.

What is Artificial Intelligence (AI)?

Broadly, AI is any computer system that uses data to mimic human intelligence in some way, including tasks such as converting speech to text, making recommendations and recognizing images. AI is already all around us, integrated into our lives to provide better experiences and make daily living easier and more efficient. Most useful applications of AI handle simple tasks humans could do but ideally shouldn’t have to, freeing them to focus on more complex things.

When Netflix suggests a new show for you or Amazon recommends a new product to purchase, AI algorithms are using what you’ve viewed or purchased in the past to power those suggestions. While you browse, your robot vacuum could be cleaning the room in the background. Or consider autocorrect when writing a text: you could find a dictionary and look up the word you are trying to spell, but AI can automatically suggest the right spelling based on what it thinks you are trying to write, so you don’t have to.
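As a loose illustration of that last example, here is a minimal autocorrect sketch in Python. The tiny word list and the similarity cutoff are assumptions chosen for demonstration; a real autocorrect system also weighs sentence context and how often a user types each word.

```python
import difflib

# A tiny stand-in vocabulary; real systems use far larger dictionaries
# plus the user's own typing history.
DICTIONARY = ["receive", "recommend", "appointment", "medicine", "schedule"]

def autocorrect(word, vocabulary=DICTIONARY):
    """Suggest the closest known word to what the user typed."""
    matches = difflib.get_close_matches(word.lower(), vocabulary, n=1, cutoff=0.6)
    return matches[0] if matches else word  # leave unrecognized words alone

print(autocorrect("recieve"))     # -> "receive"
print(autocorrect("apointment"))  # -> "appointment"
```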

How is AI used in healthcare, today and in the future?

“There are several ways AI is being integrated into healthcare delivery to improve patient health, provider well-being and enhance the care experience. Similar to how weather is forecasted, AI is used in healthcare to improve patient health by alerting care teams of patients at risk for certain harms ahead of time so they can intervene sooner. We use AI to facilitate more efficient clinic and hospital operations. We are also optimistic the technology behind ChatGPT has the potential to reduce the administrative burden of care delivery by allowing our providers to spend less time typing on their computer and more time face-to-face with their patients,” Cleveland said.

How are the risks of AI being mitigated?

“While we are excited about the potential value AI may bring to healthcare, a high degree of diligence and governance is required with any new technology to understand and mitigate risks it can create. We have strict vendor security and AI algorithm review protocols in place to assess AI technology and ensure data security and privacy before it enters our organization. Small scale pilots are conducted so we can validate performance and develop processes to educate and govern its use before deploying it broadly, where it will be continually monitored,” according to Cleveland.

AI has already been implemented in various health care capacities.

New AI tools are helping doctors communicate with their patients, some by answering messages and others by taking notes during exams. It’s been 15 months since OpenAI released ChatGPT. Already thousands of doctors are using similar products based on large language models. One company says its tool works in 14 languages.

AI saves doctors time and prevents burnout, enthusiasts say. It also shakes up the doctor-patient relationship, raising questions of trust, transparency, privacy and the future of human connection.

A look at how new AI tools affect patients:

IS MY DOCTOR USING AI?

In recent years, medical devices with machine learning have been doing things like reading mammograms, diagnosing eye disease and detecting heart problems. What’s new is generative AI’s ability to respond to complex instructions by predicting language.

A check-up could be recorded by an AI-powered smartphone app that listens, documents and instantly organizes everything into a note the patient can read later. The tool also can mean more money for the doctor’s employer because it won’t forget details that legitimately could be billed to insurance.

A doctor should ask for consent before using the tool. The patient might also see some new wording in the forms patients sign at the doctor’s office.

Other AI tools could be helping a doctor draft a message, but the patient might never know it.

“Your physician might tell you that they’re using it, or they might not tell you,” said Cait DesRoches, director of OpenNotes, a Boston-based group working for transparent communication between doctors and patients. Some health systems encourage disclosure, and some don’t.

Doctors or nurses must approve the AI-generated messages before sending them. In one Colorado health system, such messages contain a sentence disclosing they were automatically generated. But doctors can delete that line.

“It sounded exactly like him. It was remarkable,” said patient Tom Detner, 70, of Denver, who recently received an AI-generated message that began: “Hello, Tom, I’m glad to hear that your neck pain is improving. It’s important to listen to your body.” The message ended with “Take care” and a disclosure that it had been automatically generated and edited by his doctor.

Detner said he was glad for the transparency. “Full disclosure is very important,” he said.

WILL AI MAKE MISTAKES?

Large language models can misinterpret input or even fabricate inaccurate responses, an effect called hallucination. The new tools have internal guardrails to try to prevent inaccuracies from reaching patients — or landing in electronic health records.

“You don’t want those fake things entering the clinical notes,” said Dr. Alistair Erskine, who leads digital innovations for Georgia-based Emory Healthcare, where hundreds of doctors are using a product from Abridge to document patient visits.

The tool runs the doctor-patient conversation across several large language models and eliminates weird ideas, Erskine said. “It’s a way of engineering out hallucinations.”
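As a rough sketch of that consensus idea (a general illustration, not Abridge’s actual pipeline), cross-checking several models might look like the following; the model functions below are toy stand-ins, not any vendor’s API.

```python
from collections import Counter

def consensus_answer(model_fns, question, min_agreement=2):
    """Ask several independent models the same question and keep the answer
    only when enough of them agree; disagreement is treated as a sign of
    possible hallucination and deferred to a human reviewer."""
    answers = [ask(question) for ask in model_fns]
    best, count = Counter(answers).most_common(1)[0]
    return best if count >= min_agreement else None  # None = flag for review

# Toy stand-ins for real large language model calls.
models = [
    lambda q: "no sulfa allergy",
    lambda q: "no sulfa allergy",
    lambda q: "allergies: sulfa",  # one model mishears the conversation
]

print(consensus_answer(models, "Does the patient have a sulfa allergy?"))
# -> "no sulfa allergy": the outlier answer is outvoted
```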

Ultimately, “the doctor is the most important guardrail,” said Abridge CEO Dr. Shiv Rao. As doctors review AI-generated notes, they can click on any word and listen to the specific segment of the patient’s visit to check accuracy.
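One plausible mechanism for that click-to-listen review is per-word timestamps from speech recognition. The sketch below assumes a hypothetical transcript format with start and end times in seconds; it illustrates the general technique, not Abridge’s implementation.

```python
from dataclasses import dataclass

@dataclass
class Word:
    text: str     # the transcribed word
    start: float  # seconds from the start of the recording
    end: float

# Hypothetical per-word timestamps; many speech-to-text services
# can return these alongside the transcript text.
transcript = [
    Word("the", 12.4, 12.5), Word("right", 12.5, 12.8),
    Word("elbow", 12.8, 13.2), Word("is", 13.2, 13.3),
    Word("swollen", 13.3, 13.9),
]

def audio_span_for(words, index, padding=1.0):
    """Return the (start, end) seconds to replay for a clicked word,
    padded so the reviewer hears the surrounding context."""
    w = words[index]
    return max(0.0, w.start - padding), w.end + padding

# Clicking "swollen" replays roughly seconds 12.3 to 14.9 of the visit audio.
print(audio_span_for(transcript, 4))
```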

In Buffalo, New York, a different AI tool misheard Dr. Lauren Bruckner when she told a teenage cancer patient it was a good thing she didn’t have an allergy to sulfa drugs. The AI-generated note said, “Allergies: Sulfa.”

The tool “totally misunderstood the conversation,” said Bruckner, chief medical information officer at Roswell Park Comprehensive Cancer Center. “That doesn’t happen often, but clearly that’s a problem.”

WHAT ABOUT THE HUMAN TOUCH?

AI tools can be prompted to be friendly, empathetic and informative.

But they can get carried away. In Colorado, a patient with a runny nose was alarmed to learn from an AI-generated message that the problem could be a brain fluid leak. (It wasn’t.) A nurse hadn’t proofread carefully and mistakenly sent the message.

“At times, it’s an astounding help and at times it’s of no help at all,” said Dr. C.T. Lin, who leads technology innovations at Colorado-based UC Health, where about 250 doctors and staff use a Microsoft AI tool to write the first draft of messages to patients. The messages are delivered through Epic’s patient portal.

The tool had to be taught about a new RSV vaccine because it was drafting messages saying there was no such thing. But with routine advice, like rest, ice, compression and elevation for an ankle sprain, “it’s beautiful for that,” Lin said.

Also on the plus side, doctors using AI are no longer tied to their computers during medical appointments. They can make eye contact with their patients because the AI tool records the exam.

The tool needs audible words, so doctors are learning to explain things aloud, said Dr. Robert Bart, chief medical information officer at Pittsburgh-based UPMC. A doctor might say: “I am currently examining the right elbow. It is quite swollen. It feels like there’s fluid in the right elbow.”

Talking through the exam for the benefit of the AI tool can also help patients understand what’s going on, Bart said. “I’ve been in an examination where you hear the hemming and hawing while the physician is doing it. And I’m always wondering, ‘Well, what does that mean?’”

WHAT ABOUT PRIVACY?

U.S. law requires health care systems to get assurances from business associates that they will safeguard protected health information, and the companies could face investigation and fines from the Department of Health and Human Services if they mess up.

Doctors interviewed for this article said they feel confident in the data security of the new products and that the information will not be sold.

Information shared with the new tools is used to improve them, so that could add to the risk of a health care data breach.

Dr. Lance Owens is chief medical information officer at the University of Michigan Health-West, where 265 doctors, physician assistants and nurse practitioners are using a Microsoft tool to document patient exams. He believes patient data is being protected.

“When they tell us that our data is safe and secure and segregated, we believe that,” Owens said.

Associated Press contributed to this story.