Skeptical about AI in healthcare? Here's how some doctors and hospitals are using it

All of Cincinnati’s major hospital systems are using artificial intelligence, technology that most Americans are wary of.

Cincinnati’s TriHealth uses artificial intelligence, or AI, to help diagnose pulmonary embolism, stroke and breast cancer – conditions for which early detection can be lifesaving.

UC Health and St. Elizabeth Healthcare are using AI for detection and diagnosis. Christ Hospital uses AI to automate insurance and claims billing, while Bon Secours Mercy Health relies on AI to recruit and hire nurses.

Despite its widespread rollout in hospital systems, most Americans don’t trust this technology, according to a 2023 Pew Research Center survey. Fewer than 40% of Americans expected AI to improve patient health outcomes, the survey found.

Nationwide, insurance companies have been sued over faulty and allegedly discriminatory algorithms. Doctors have been criticized for using ChatGPT to write up medical records and potentially exposing sensitive patient information by doing so.

Hospital executives say they are using artificial intelligence – which the National Institutes of Health defines as machines learning to perform tasks – to increase efficiency and elevate the standard of care provided to patients.


“What people don't realize is AI has been around for a very long time, starting back in the 1950s," said Paul Grone, chief information officer of Christ Hospital. "It’s evolved from many years ago. Health care has been using AI in the back office for quite some time.”  

Cincinnati hospitals say AI can help doctors

The Christ Hospital in Mount Auburn.

Christ Hospital is partnering with Microsoft and Epic Systems, the medical records software company that runs MyChart, to develop AI that helps doctors respond to patient emails.

Grone said he doesn’t think AI will result in less face-to-face time between patients and doctors, citing AI technology that records medical notes during appointments.

“Normally in the appointment, the provider would be on the computer the whole time as he or she’s talking to you,” he said. “Now, they’re facing you ... and the system is capturing the conversation. So, actually, it improves the face time with the patient.”

He said Christ Hospital aims to pilot the technology starting in February.

TriHealth’s Chief Operating Officer Terri Hanlon-Bremer shared similar sentiments about AI improving the patient experience. “It helps us pinpoint where that doctor should focus ... in an effective and efficient manner,” she said.

Hanlon-Bremer said the AI would be an aid, rather than a substitute, for doctors. “AI is not replacing the role of the physician or the clinical decision-making that a physician brings to the table,” she said.

TriHealth’s four-hospital system is also considering a ChatGPT-like system that would help doctors respond to patient questions, according to John Ward, TriHealth’s senior vice president of regional operations.

“One of the tough things for physicians today with electronic medical records and with patient portals is that they get bombarded with a ton of messages,” Ward said. "So being able to process those and respond to those is difficult. It ends up taking hours at night.”

He said AI can help doctors prioritize those messages to save time.

Unlike ChatGPT, however, which was briefly banned in Italy for collecting data without consent, hospitals are subject to HIPAA, the federal law that prohibits healthcare providers from sharing or selling a patient’s health information.

“If you're going to share data of any kind, it has to be totally de-identified,” Ward said.

Part of the task that hospitals face is properly vetting AI vendors. As TriHealth’s Hanlon-Bremer remarked, “The challenge we have is how to find a company that is credible, that has technology that is going to better our clinical outcomes, and that isn’t going to go away overnight.”

That concern isn’t hypothetical: Columbus-based AI startup Olive shut down suddenly in November 2023 after promising to use AI to increase efficiency at more than 600 hospitals across the U.S. TriHealth had previously partnered with the now-defunct startup to automate medical billing and process denials.

Good Samaritan Hospital at Clifton and Dixmyth avenues in the CUF neighborhood.

Most Americans skeptical about AI’s benefits

Most Americans do not share hospital executives’ enthusiasm about the potential of AI.

In the Pew survey, 75% of respondents thought healthcare providers would adopt AI technologies too quickly, before fully accounting for the risks to patients, and 79% said they did not want an AI chatbot to respond if they needed mental health support.

In May 2023, reports emerged that an AI-driven chatbot designed to help those struggling with eating disorders ended up offering users tips on dieting instead. The chatbot’s host, the National Eating Disorders Association, took it down shortly thereafter.

Implementing AI in medical billing has also met with challenges.

Insurance company Cigna was sued twice in 2023 over allegations that it relied on AI to deny thousands of pre-approved medical claims at a time. With the help of algorithms, Cigna employees took 1.2 seconds on average to reject each claim, according to a class action suit. Plaintiffs said that Cigna violated a California law that obliges insurers to evaluate claims in a “thorough, fair, and objective” manner.

Similarly, UnitedHealth Group was hit with a proposed class action lawsuit arguing that its AI algorithm methodically rejected elderly patients’ claims for care, such as stays in nursing facilities.

Biden, doctors call for more AI regulation

The privacy and ethics concerns that come with algorithms trained on large swaths of personal data have doctors and elected officials alike calling for more patient protections.

In an October 2023 executive order, President Joe Biden called on Congress to pass data privacy legislation, referring to AI as holding “extraordinary potential for both promise and peril.”

Earlier in the year, the American Psychiatric Association issued a statement strongly opposing doctors entering patient data into generative AI tools like ChatGPT, citing probable violations of HIPAA.

Generative AI tools for healthcare have not yet been approved by the Food and Drug Administration. However, Dr. Douglas Flora, the executive medical director of oncology services at St. Elizabeth Healthcare, thinks it’s only a matter of time.

“Looking three to five years down the road, I don’t think that a health care system that hasn’t employed generative AI is going to be able to compete with those that have," Flora said.

This article originally appeared on Cincinnati Enquirer: Artificial intelligence in healthcare: How hospitals are using AI