What AI can do in healthcare—and what it should never do

Daniel Yang is vice president of AI and emerging technologies at Kaiser Permanente. (Photo: Kelsey McClellan for WSJ)

Summary

  • Kaiser Permanente AI chief Daniel Yang is on the front line of deploying artificial intelligence as well as testing its bounds. “We did identify some hallucinations.”

Dr. Daniel Yang, a top executive overseeing artificial intelligence at Kaiser Permanente, is driving new uses for AI in healthcare. He is also learning what it shouldn’t do.

Yang, an internal-medicine physician, arrived last year at the California-based health system, which brings together health insurance, hospitals and physicians and has more than 12 million members. Previously, he was at the Gordon and Betty Moore Foundation, where he directed the patient-care program's work on diagnostic efforts. He also sees patients at a Department of Veterans Affairs hospital.

Now, as vice president of artificial intelligence and emerging technologies at Kaiser Permanente, he is dealing with some of the biggest questions around how far and how fast to go with these new tools. He spoke with The Wall Street Journal about the possibilities and challenges for AI.

What are some of the ways Kaiser Permanente is using AI?

One of the most exciting areas for AI and technology is in supporting our clinicians to manage some of the administrative burden that they face on a day-to-day basis. Kaiser Permanente, to the best of my knowledge, is deploying the largest implementation of the clinical AI scribe technology in the country. The purpose is to generate a first draft of the clinical note from a recording of the patient’s encounter.

What has been the biggest failure with AI?

You’ve got all these eager AI developers who get their hands on a data set and ask themselves, what AI tool can I develop from this data set, instead of starting with the problem and then finding the data and developing the tool. So that’s one big issue we’re seeing.

And two is, I think a lot of people fail to realize that developing the algorithm is really the easy part. The part that really takes work, and the part that adds value, is redesigning the workflow to accommodate the AI tool. There are a lot of great solutions on paper, but the healthcare systems may not have the expertise or the interest to really redesign the workflow to maximize the benefit from that AI tool.

What’s been your biggest surprise?

A lot of physicians felt threatened by AI, particularly diagnostic AI. They thought it was infringing upon their sense of professional competence and autonomy. And so within that backdrop, what really surprised me in the last 18 months, with the introduction of generative AI, was seeing this completely rapid shift in physician attitudes toward AI, from one of reluctance and skepticism to one of very deep excitement, delight and now demand.

What are you most afraid of with AI in healthcare?

I don’t think that the enthusiasm around developing tools has been met with the same level of enthusiasm around testing, validating and demonstrating the safety and effectiveness of these tools. There’s a tremendous amount of effort in the development space, and I feel like the infrastructure to support responsible AI has not yet been able to catch up.

I worry about a two-tiered system of AI. The AI “haves” are going to be large, well-resourced systems like Kaiser Permanente that will put the time and energy into testing, evaluating and responsibly deploying AI technologies to the benefit of our members, and the AI “have-nots” will be health systems like county health systems, federally qualified health centers, rural hospitals that either don’t have the infrastructure or know-how to deploy the AI technologies, or they deploy without fully understanding how they work and their limitations.

What should AI never do? Is there a red line?

We view these tools as augmenting clinicians, so I wouldn’t feel comfortable with AI automating clinical decision-making, in diagnosis or treatment.

Are there specific problems right now that you’re looking to find AI tools to solve?

Definitely tools that support our clinicians to be more effective and efficient, to reduce some of the administrative workloads. Another area that we’re exploring and have been actively working on is how to better manage the dramatic increase in secure messages that patients are sending our providers.

If we look five years into the future, what do you think AI will be doing in healthcare?

We often treat people as members of a larger population. I think there’s a future state in which we really are moving toward much more personalized care, where the entirety of not just our medical record but our activities, the foods we eat, is really being leveraged to inform diagnoses and interventions that are customized to the individual.

What about the black-box element of AI? What can be done about that worry?

There are things we use every single day in medicine without fully understanding how they work. Let me give you the example of Tylenol. What is the mechanism of action of Tylenol? We don’t fully understand. Or general anesthesia. We don’t fully understand the mechanism, how it works. It’s pretty remarkable, right? People use it every day because they know it’s safe and they know it’s effective. And so explainability, understanding the mechanism of action, is really a proxy for trust in a tool. So while we are working on explainable AI, I think what we have to realize is that transparency is just a proxy for trustworthiness. And what people really want, at the end of the day, is trustworthiness.

We read these things about AI hallucinating, which sounds very frightening in any setting, and particularly in medicine. Is that a concern?

There are a lot of technology approaches to identifying and flagging and removing these inaccuracies or hallucinations before they ever show up in the first draft of a clinical note. But it’s also the reason we make sure that our clinicians are reviewing every note.

We did identify some hallucinations in these drafts of the clinical notes. We were finding that they were oftentimes in the plan section. We were hearing things like, it would write in the note, come back and see me in two weeks. But the doctor said, I never said that. It’s not an unreasonable thing, but it’s not something that we talked about in the visit.

AI is really just a mimicker of the data that it’s trained on. It’s drawing that from both the direct context it has and from its training. The “aha!” moment for me is that, at Kaiser Permanente, we don’t make our revenues by driving more visits [unlike others that operate on a fee-for-service model]. If anything, we want to resolve people’s clinical complaints as quickly and as efficiently as possible.

Interview has been edited and condensed. Write to Anna Wilde Mathews at Anna.Mathews@wsj.com
