Bot Beats Docs at Empathy

In a fun but practical study, AI-generated responses to patient questions consistently outperformed those from human physicians

What’s the Claim?

In a fun, informative, and practical trial, ChatGPT went head-to-CPU with human physicians to answer a random sampling of real patients’ medical questions. Responses were graded for quality and empathy by healthcare professionals from a variety of backgrounds who did not know whether each response came from a human or the bot.

The computer blew away the trained physicians, not just in response quality but also in response empathy.

  • Blinded assessors from three different specialties preferred ChatGPT’s replies to the physicians’ responses nearly 80% of the time.
  • The bot’s responses were 3.6 times more likely to be rated as good or very good in quality.
  • The prevalence of responses deemed unacceptable was roughly tenfold higher among the physician-written answers.
  • The chatbot’s replies were also 9.8 times more likely than the physicians’ to be rated “empathetic” or “very empathetic.”

How’s It Stack Up?

This is a one-off, as far as I know, and it grabs one’s attention.

What’s Our Take?

It’s too soon to start fretting about being sent to the mines to dig petroleum for our four-wheeled, AI-enabled overlords, and it’s more fun to think about how we can use this eye-opening study to help us practice better. I took away a few things from this one:

  • We don’t have to write long, but we need to be more thoughtful. Longer responses from physicians scored higher in this study, but still not as high as responses from the chatbot. Short but thoughtful responses are fine, as long as each response conveys care and empathy in some explicit way (“Thanks for your note. I’m sorry that you are hurting. Let’s try . . .”).
  • We should get more creative in how we use interactive computing in our practices. For example, a recent Efficient Practice post here at CORRelations shared how automated texts after hospital discharge resulted in more patient contact, less staff work, and fewer readmissions. Worth a look.
  • There will be a role for computer-based and AI-assisted tools in our practices, sooner rather than later. While none of us would let a chatbot respond unsupervised to patients on our behalf, there is little question that we’ll be using AI-assisted approaches to patient queries very, very soon. We’ve known that the bots are fast and efficient; we learned here that response quality and empathy are more than adequate. The uncertainty, of course, is quality control. It only takes one bad mistake to cause real harm. Still, it’s easy to imagine chatbot-generated drafts that a nurse or doctor could then edit for real-world use.

And from the looks of things here, we could learn a bit about human communication from this soulless tool. Sigh.

Source

Ayers JW, Poliak A, Dredze M, et al. Comparing Physician and Artificial Intelligence Chatbot Responses to Patient Questions Posted to a Public Social Media Forum. JAMA Intern Med. 2023;183(6):589-596.