NC researcher: I’m excited about AI in healthcare - with guardrails

In March, a group of artificial intelligence experts published an open letter warning of serious dangers posed by these technologies and arguing for a six-month “pause” in development for some kinds of AI. At roughly the same time, Google’s Geoffrey Hinton, often dubbed the “godfather of AI,” stepped down from his post to raise the alarm about similar issues.

What exactly is going on? Why is AI — a technology that many of its own creators are worried about — suddenly everywhere? And what does that mean for us?

We’ve recently seen a big jump in the capabilities of these technologies, especially the kind known as generative AI. Some publicly available generative AIs, such as OpenAI’s ChatGPT, can write an essay or answer questions on seemingly any topic — and can do so in a natural-sounding, conversational style.

These capabilities have delighted some, who see enormous possibilities for automating research or even generating new scientific insights. Others are worried that AI may threaten jobs and introduce bias and errors that can be hard to recognize. Whether you see AI as a threat, an opportunity, or some combination of both, one thing is certain: the field is changing so fast that even experts have trouble keeping pace.

As a researcher who works with AI in healthcare, I share the enormous interest in using AI to improve patient care. But I also understand why many people have concerns.

Unlike some fields, research on medical AI tends to move cautiously because the stakes are so high. If an algorithm for marketing shoes makes a mistake, a product might not sell; if a medical AI recommends the wrong dosage of a medication, a patient’s well-being is at risk. Powerful generative AIs raise these stakes further because they often work in ways that are hard to directly explain or evaluate.

At the same time, the potential for AI to help patients and healthcare providers is real. AI technologies can speed up burdensome, time-consuming processes and make them more efficient. This is a particularly urgent need in medicine, where enormous amounts of time and resources are expended not on patient care, but on dealing with paperwork and bureaucratic processes.

An AI that could help manage this workload would free healthcare providers to spend more time on meaningful interactions with patients and families. Other types of AI could help predict how patients might respond to therapies, or even suggest possible new treatments.

Here are some ways I believe we can achieve these benefits while protecting patients from harm:

Ensure that AI technology serves humans, rather than taking over their responsibilities or replacing them. No matter how good an AI is, humans must remain in charge at some level.

Define the task we want the AI to accomplish. The best way to do this is to start with the easiest problems. For instance, using AI to generate a summary of information for a patient would be less risky than using one to generate a diagnosis.

Describe what the successful use of an AI tool looks like. A key part of this process is thinking through all possible consequences of using the technology, good or bad.

Create transparent systems for continuously testing and monitoring AI tools at every step, from creation to use in clinics. We also need clear paths for taking action when AIs show signs of poor performance, errors, or bias.

Artificial intelligence offers possibilities that test the limits of our imagination. For many, these may exceed the boundaries of what is comfortable or even desirable, as the current call for a pause on AI development demonstrates.

Perhaps, instead of debating a pause, it would be better to ask: What guardrails do we need to ensure that AI tools are trustworthy, equitable, and work for human benefit? If we can incorporate the steps listed above, we’ll be making a good start toward ensuring that AI serves our common interests — in healthcare and beyond.

Michael Pencina, Ph.D., is director of Duke AI Health, which is part of the Duke University School of Medicine. The views expressed are his own.