In a paper published July 31, researchers at DeepMind shared that their AI algorithms can detect potentially fatal kidney injuries up to 48 hours before doctors can. Earlier this year, New Scientist reported that researchers had trained AI to diagnose children’s illnesses with up to 97% accuracy.

But when it comes to using AI for mental health, psychiatrists are a lot less bullish.

Will AI replace doctors?

Last month, researchers at Duke University and Harvard Medical School released the results from their survey of 791 psychiatrists across 22 countries. Those researchers partnered with Sermo, an online social network for physicians, to ask psychiatrists how likely AI is to replace them:

  • 4% of psychiatrists felt that future technology would make their jobs obsolete
  • 17% believed technology would likely replace a human in providing empathetic care
  • 46% were uncertain whether the benefits of AI/ML would outweigh the risks

The study found just two tasks that most psychiatrists think they’ll outsource to AI:

  • Providing patient documentation such as updating medical records (75%)
  • Synthesizing patient information to reach diagnoses (54%)

We should point out the likely culprit behind this skepticism: the question itself. Sermo’s survey asked psychiatrists how likely they believe AI is to replace them. This differs greatly from asking doctors whether they use AI to assist them.

Man vs. Machine

Per the examples above, AI is now advanced enough that it can outperform doctors at rules-based, repetitive tasks. These include analyzing millions of records to spot patterns that can lead to diagnoses. What it can’t do is replace doctors altogether—especially not in a field like psychiatry, which focuses on mental and behavioral health.

To explain why, GetApp spoke with Dr. Murali Doraiswamy, professor in the departments of Psychiatry and Medicine at Duke and this study’s co-author. Read on to learn:

  • Why psychiatrists don’t believe AI is likely to replace them altogether
  • Whether examples of AI outperforming psychiatrists exist
  • How to include AI in doctors’ training
  • When to include doctors in the AI product design process

GetApp: Your study’s results found that most psychiatrists don’t believe AI will make their jobs obsolete or replace their role in providing empathetic care. How does this group’s pessimism compare with that of their peers in other medical fields, where AI has outperformed doctors in the speed and accuracy of diagnoses?

Dr. Murali Doraiswamy: No study has directly compared the views of different specialties with regard to AI. Optimism about the utility of AI among all groups (doctors, data scientists, technologists) is likely higher for image-analysis and pattern-detection specialties such as radiology and pathology.

However, psychiatry involves a much greater integration of medical factors with social, psychological, and cultural factors, as well as establishing a human therapeutic alliance, which may be why psychiatrists in the survey felt pessimistic. It’s also important to note that the studies where AI outperforms doctors have all been conducted in controlled settings.

GA: Have you seen successful examples of AI outperforming psychiatrists at essential job tasks? If so, describe the results.

MD: Yes, but only in controlled settings. For example, we published a study a few years ago showing that AI could spot which at-risk subjects would develop Alzheimer’s disease five years before symptoms appeared.

The AI was able to put together 33 different pieces of clinical, genetic, laboratory, and scan data to make this prediction. Synthesizing this type of multidimensional data is hard for humans.

Likewise, a preliminary study has shown that people are more likely to disclose embarrassing details to a nonjudgmental, AI-powered avatar than to a human psychotherapist. So, paradoxically, AI may be better at reducing stigma than a human therapist.

Scientists have also created a machine learning (ML) program that can use hospital admission data to predict the likelihood that someone will attempt suicide in the next week or in the next two years, both with over 80% accuracy. Humans are still suboptimal at predicting suicide risk.

While these findings are promising, none have yet been fully validated for real-world use.

GA: Overall, how well-equipped do you think the psychiatry profession is to handle upcoming technological change? What do you foresee as the biggest opportunities and challenges?

MD: AI is going to be the “stethoscope” of the future, but most practicing doctors were never taught about AI/ML in medical school or residency, so there is a huge need for re-skilling. We will need AI courses specially designed for doctors, and they should be a required part of future continuing medical education.

There is also a need for doctors to enhance their uniquely human skills, like empathy and cultural sensitivity, which AI is unlikely to replace anytime soon. Those who possess both skill sets will ultimately be more desired by patients.

GA: In your assessment of the study’s results, you warned that AI’s ethical concerns “should be a high priority for research since even a single line of bad code could have serious repercussions.” What’s the most realistic way to prevent this?

MD: AI and mobile digital technologies are a double-edged sword. We need to recognize that this is a work in progress and put governance systems in place to ensure the technology is used in a fair, empathetic, and evidence-based manner to minimize known and future emergent risks. I coauthored a recent report on this.

GA: How would you like to see psychiatrists included in the development lifecycle for the AI-powered products that they’ll use? How much of a say should they have over the software and other AI tools that they’ll incorporate into their work?

MD: If we don’t understand how end users think and work, even the smartest technology can fail. AI/ML must be designed with the user in mind, which is why surveys like this are so important.

Psychiatrists and patients should be involved right from the start. Failing to involve end users is one reason that 50% of AI startups fail, and it’s also why medicine is so hard to disrupt.


How do other healthcare professionals use AI?

If you’re interested in learning more about the promise and pain points of using AI in healthcare, check out research from GetApp.
