AI and health care: Should we be worried?

Experts break down the benefits and the dangers of artificial intelligence in health care — and propose how we could mitigate any potential harm.

Photo illustration: Daniel Zender for Yahoo News

A Pew Research Center poll found that 6 in 10 U.S. adults would feel uncomfortable if their own health care provider relied on artificial intelligence (AI) to diagnose disease and recommend treatments. But the reality is that AI has already entered the health and wellness space, with some doctors harnessing its power and potential.

Yahoo News spoke with Marzyeh Ghassemi, an assistant professor at MIT’s Institute for Medical Engineering and Science, and James Zou, an assistant professor of biomedical data science at Stanford University, to learn more about the intersection of AI and health care — what’s currently possible, what’s on the horizon and what the downsides could be.


What’s currently possible?

Here are some examples of what AI can do right now.

  • Make diagnoses and assessments. “[There are] over 500 medical AI algorithms and devices that have been approved by the FDA in the U.S. that can be used on patients now. And a lot of these algorithms are basically helping clinicians to make better diagnoses, better assessments of the patients,” Zou said. By using AI to help do tasks like evaluating medical images, clinicians are able to cut out some of the more labor-intensive manual work.

  • Make prognoses. While many current AI models focus on helping diagnose patients, Ghassemi said she’s also seen some models being developed that can help predict the progression of a disease or development of possible complications from a disease.

  • Simplify medical information for patients. “A lot of the medical terminology and concepts can be pretty complicated,” Zou said. “One of the projects that we've done is to use ChatGPT to basically take the medical consent forms, which are horribly difficult to read, and then simplify it so that somebody at the eighth-grade reading level will be able to read it.”
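The article doesn't describe the internals of Zou's project, but the "eighth-grade reading level" target he mentions is typically checked with a standard readability formula. As an illustration only, here is a minimal sketch of the Flesch-Kincaid grade-level formula that such a pipeline might use to verify a simplified consent form; the example sentences and the crude syllable counter are our own, not from the study.

```python
import re

def flesch_kincaid_grade(text: str) -> float:
    """Estimate the U.S. school grade level needed to read `text`."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)

    def syllables(word: str) -> int:
        # Crude heuristic: count runs of vowels, with a minimum of one.
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    total_syllables = sum(syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * total_syllables / len(words)
            - 15.59)

legalese = ("Participation is contingent upon comprehension of "
            "the aforementioned contraindications and stipulations.")
plain = "You can join if you understand the risks listed above."

print(flesch_kincaid_grade(legalese))  # well above an eighth-grade level
print(flesch_kincaid_grade(plain))     # around a sixth-grade level
```

A pipeline like the one Zou describes could loop: ask the language model to rewrite the form, score the result, and retry until the estimated grade level drops to eight or below.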


What could AI do in the future?

And there may be even more uses on the horizon. Here’s what AI might do in the future.

  • Organize health care data. Zou said a big challenge is that data from different hospitals, including electronic health records, can’t be exchanged easily. AI could help with that. “If you're a patient and you go to different hospitals, often the hospitals do not really talk to each other very well. And this is one area where these AI algorithms, potentially the language models, could make it much easier.”

  • Predict bad outcomes. AI could also help identify patients who are at risk so that they get the care they need early on — which could help combat the maternal morbidity and mortality rate in the U.S. “My ideal setting would be if we had risk scores that accurately predicted potential poor outcomes for women. Then we could perhaps alert care teams that they are making poor choices about women's health care, or we could target additional resourcing to pregnant women when they might need it the most,” said Ghassemi.

  • Improve treatment response predictions. For chronic conditions such as depression, in many cases a clinician may have to make “an educated guess” as to which drug or treatment would work best for an individual patient. Ghassemi said AI could help clinicians make better decisions by factoring in constraints such as body weight or sex, which can affect how a patient metabolizes certain drugs.

  • Develop new drugs. “There's this whole pipeline where at early stages AI can be used to help us discover new drugs, new molecules, new antibiotics,” Zou said.


The scary side of AI in health care

“The danger, I think, is not that it becomes a killer robot and comes for you. The danger is that it replicates the poor care that you are already receiving right now, or exacerbates it,” Ghassemi said.

“We're essentially training machine learning systems to do as we do — not as we think we do or hope we would do. And in health care, what happens is, if you train machine learning models naively to do what we currently do, you get models that work much, much more poorly for women and for minorities.”

One AI-driven device, for example, overestimated blood oxygen levels in patients with darker skin, resulting in the undertreatment of hypoxia (oxygen deficiency). A 2019 study found that an algorithm used to predict health care needs for more than 100 million people was biased against Black patients. “The algorithm relied on health care spending to predict future health needs. But with less access to care historically, Black patients often spent less. As a result, Black patients had to be much sicker to be recommended for extra care under the algorithm,” NPR reported.
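The failure mode NPR describes can be made concrete with a toy example (the patients and numbers below are invented for illustration, not drawn from the study): when an algorithm ranks patients for extra care by past spending instead of by a direct measure of illness, a patient who is equally sick but has historically had less access to care falls down the list.

```python
# Invented data: two equally sick patients, but patient B has
# historically had less access to care and so has spent less.
patients = [
    {"name": "A", "chronic_conditions": 4, "past_spending": 12_000},
    {"name": "B", "chronic_conditions": 4, "past_spending": 5_000},
]

# Proxy-based ranking: past spending stands in for medical need.
by_spending = sorted(patients,
                     key=lambda p: p["past_spending"], reverse=True)

# Need-based ranking: a direct measure of illness instead.
by_need = sorted(patients,
                 key=lambda p: p["chronic_conditions"], reverse=True)

# Under the spending proxy, A is ranked above B for extra care,
# even though both patients have identical medical need.
print([p["name"] for p in by_spending])
```

The fix reported for the 2019 study was essentially the second ranking: replacing the spending proxy with measures of actual health sharply reduced the disparity.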

The National Eating Disorders Association also made headlines after its new AI chatbot, Tessa, advised users to count calories and measure body fat, forcing the organization to pull the chatbot just months after it had laid off its human helpline staff.

“I think the problem is when you try to naively replace humans with AI in health care settings, you get really poor results,” Ghassemi said. “You should be looking at it as an augmentation tool, not as a replacement tool.”


How can we mitigate the potential harms of AI in health?

Tech industry leaders released a one-sentence statement in May saying that "mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." In the health care space, Ghassemi and Zou made a few suggestions for steps that could be taken to reduce possible harms posed by AI.

  • Be transparent. Zou said a big first step would be more openness about what data is being used to train AI models, such as chatbots, and how those models are evaluated.

  • Carefully evaluate algorithms before letting patients interact with them. There’s already a risk of patients getting misinformation online, but if hundreds of thousands of patients are coming to a single source such as a chatbot that has issues, the risk is even greater, Zou said.

  • Keep AI systems up to date. “You need a plan for keeping the AI system up to date and relevant with current medical advice, because medical advice changes,” Ghassemi said. “If you have a model go stale and start to give incorrect recommendations to doctors, that could also lead to patient harm.”

  • Establish regulations. Ghassemi suggested that the Department of Health and Human Services Office for Civil Rights could play a role. “They could enforce this line that prohibits discrimination in a health care context and say, ‘Hey, that applies to algorithms too.’”