
Are robots going to replace doctors?

Artificial intelligence seems to be everywhere today. There are always new stories about ground-breaking innovations, new developments, or massive investments in AI technology. At Queen’s, as we recruit the scholars of the future, many departments across the university are looking for researchers with skills in areas such as AI, machine learning, and data analytics.

Health care is no exception to the spread of AI. Many aspects of work in the field will evolve as new technologies are integrated into care, and increasingly these changes are happening in ways that many would not have anticipated.

A few weeks ago, for example, researchers at Western University published a study that suggests that brain imaging and artificial intelligence can be used to accurately diagnose subtypes of post-traumatic stress disorder. This finding is potentially highly significant because it could lead to a greater incorporation of AI and digital technology into Psychiatry.

If physicians are able to rely on AI to reliably interpret fMRI brain scans and make diagnoses of PTSD based on them, this innovation could make the medical system more efficient by helping patients access the care they need more quickly.

And this is not the first study that could impact Psychiatry in this way. In August of last year, researchers found that an algorithm could examine scans and differentiate patients affected by major depressive disorder from those affected by bipolar disorder with 92.4% accuracy. This algorithm could also predict how patients would respond to different medications.
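To make the idea concrete, here is a minimal, purely illustrative sketch of the kind of diagnostic classifier these studies describe. The data below are synthetic stand-ins for imaging-derived features – this is not real fMRI data, nor the actual methods of either study – and the model is a simple logistic regression from scikit-learn:

```python
# Illustrative sketch only: a classifier separating two hypothetical
# diagnostic groups from synthetic "scan features". Real studies use
# curated imaging datasets and far more rigorous validation.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# 200 synthetic patients, 50 features each; the group means are
# shifted so the two classes are separable.
n_patients, n_features = 200, 50
labels = rng.integers(0, 2, size=n_patients)      # 0 = group A, 1 = group B
features = rng.normal(size=(n_patients, n_features)) + labels[:, None] * 0.8

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.25, random_state=0)

# Fit on the training split, then report accuracy on held-out patients.
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
accuracy = clf.score(X_test, y_test)
print(f"held-out accuracy: {accuracy:.1%}")
```

The point of the sketch is the workflow, not the numbers: a model is trained on labelled cases and then judged on patients it has never seen, which is the kind of held-out performance the published accuracy figures refer to.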

These stories are just a tiny window into the massive technological developments that will reshape health care in the coming years. Artificial intelligence and digital technologies will gradually transform many aspects of the ways in which patients receive treatment and health care professionals conduct their work.

To respond to this ongoing shift, the Royal College of Physicians and Surgeons of Canada has established a Task Force on Emerging Digital Technologies, and I am very honoured to be serving as its chair. Broadly speaking, this task force will formulate and provide recommendations to inform the Royal College’s strategy regarding the impact of emerging digital technologies on specialty medical education, training, and delivery of care. As digital technologies spread further and further into health care, the Royal College wants to be sure that specialty medical education is keeping pace with all the different changes, so that our future specialist physicians are fully prepared for the medical landscape they’ll be working in.

In my work on the task force so far, I’ve had many thought-provoking conversations about AI and have learned a great deal about current technological developments in health care.

From a strictly educational point of view, there are four central questions that those of us working in health care and medical education need to address in order to deal with some of the most pressing challenges that we will encounter in implementing AI on a large scale. Those questions are:


  1. Are healthcare practitioners going to be replaced by AI?
  2. What kinds of students should we now be accepting into our health care education programs?
  3. Will the increased use of AI in health care have implications for curricula in medical school and residency?
  4. What are the implementation challenges of adopting new technologies for practitioners?


So: are healthcare practitioners going to be replaced by AI? The short answer to this question is “no.” But that no comes with a major asterisk. AI may not replace healthcare providers, but it will absolutely force us to evolve.

Every specialty is going to look different. As a colorectal surgeon, I spent twenty-five years removing colon cancer from patients’ bodies. But it’s hard to imagine that in fifty years it will still be a human surgeon holding the knife as opposed to a robot guided by exquisite patient-specific images that have been generated by the next generation of scanning technology. And I also wouldn’t be surprised if that fifty years were actually twenty years.

While it is true that AI will be better than human physicians at a wide range of things, this doesn’t mean that surgeons or any other specialists are going to disappear.

For all of the power and potential of AI, there are some things that it will just not be able to do. AI may be able to remove cancer from a colon, but it cannot show compassion or understand social cues or emotions. It will certainly not be able to do so better than human caregivers.

This is significant because – as all healthcare practitioners know – giving a diagnosis is never as simple as repeating some lines from a medical textbook. When you tell a husband and father that he has cancer, you’re not just telling him some information about treatment plans and life expectancy. You’re also watching the tear on his wife’s cheek and the concerned look on his daughter’s face. You’re figuring out how to give the diagnosis without crushing all of their spirits. You’re looking for the right words to say – the ones that will reassure everyone, that will let them know that you care and that you will be doing all that you can for them.

AI has come a long, long way, but it’s still nowhere near capable of understanding emotions and showing compassion at that level. AI cannot squeeze a patient’s hand, or give them a hug. Only human caregivers can do that.

One of the major obstacles to integrating AI into healthcare, then, is also a major opportunity for traditional practitioners. AI cannot show meaningful compassion to patients, but it can give human caregivers something that the medical futurist Eric Topol calls “the gift of time.” For the past several decades, patients and practitioners alike have complained that doctors do not have enough time with their patients. As AI makes healthcare systems more efficient, doctors and nurses may be able to devote more time and energy to tailoring treatments to the needs of individual patients, and to caring for them holistically, rather than just treating their illnesses.

The second major question we’ll need to ask in medical education is: What kinds of students should we be accepting into our programs?

It is becoming increasingly clear that the doctors of the future will need strong digital literacy skills. And by digital literacy, I don’t just mean that they will need to know how to use their iPhones really well. The medical students of the future will need a working knowledge of AI, robotics, and many other technological advances.

They will also need to be exceptionally adaptable and innovative individuals. In other words, they should be people with an entrepreneurial spirit. At the moment, we are still primarily assessing applicants on grades and standardized aptitude tests. We do try to look for inferential evidence that applicants are creative and innovative people with leadership skills, but it is not the essential construct by which we choose medical students. My colleagues and I in education leadership will soon have to figure out how to assess applicants to our programs to make sure they have the skills that the physicians of the AI generation will need.

The next question is closely related to the previous one: Will the increased use of AI in healthcare have implications for curricula in medical school and residency? The answer to this, from my perspective, is decidedly yes. Curricula for medical students at both the undergraduate and postgraduate level will have to adapt. We will need to incorporate serious instruction on AI algorithms and data science into our curricula.

This is certainly a great opportunity for medical schools to evolve with the times, but it is also a challenge. Changing curricula can be a slow process, and schools need to find the right faculty to teach new skills and areas. And hiring new faculty members can also be a difficult and protracted process in its own right.

To be sure, medical school curricula will change to meet the new demands of the profession. But it will take leadership and persistence to make those changes, and they will not happen overnight.

The final question that I think those of us in healthcare education need to consider when thinking about AI is: What are the implementation challenges of adopting new technologies for practitioners?

If we’re going to bring more and more AI into healthcare, we’re obviously going to need to make sure that our practitioners know how to use it. This will create a number of interconnected challenges. How will we deal with the cost? How will we ensure that everyone feels comfortable in their new role? How will we make sure that everyone has access to retraining? How will we make sure that everyone wants to be retrained?

In his preliminary report on technology in healthcare for the NHS in Britain, Eric Topol touches on several of these concerns. He recommends that time for retraining should be incorporated into normal working hours for practitioners. He also makes it clear that the case will have to be made that these new technologies are actually going to be improvements over the current methods that they are replacing.

To help provide widely accessible and cost-effective training for practitioners, Topol believes that our idea of the classroom will need to change. We can no longer think of medical education as students in an auditorium watching a PowerPoint. We are rapidly moving to the flipped classroom, where digital delivery of information is the default. And increasingly, what is being delivered is not a one-hour lecture but a series of three-minute YouTube videos or podcasts. These developments can be used to train learners at all levels – from the undergraduate medical student to the longtime specialist who needs to be retrained.

The solutions that Topol recommends in Britain could be potential solutions in the Canadian system, or we might find others that work better for us. But it’s undeniable that the re-training of our healthcare workforce to be able to work with AI will be a central challenge to the implementation of new technology.

In summary, I believe that AI will be able to play a central role in improving healthcare. But it will need to be implemented with the guidance of physicians. And those of us in healthcare will need to address the four central questions about challenges to implementation that I have laid out here.

Do you have thoughts on the spread of AI into health care? Has it already started affecting your practice or your treatment in any way? I’d love to know your thoughts on the topic in the comments below, or better yet, please stop by the Macklem House, my door is always open.




Photo of robot by Franck V. on Unsplash

Lawrence Lynn

Thu, 02/14/2019 - 07:02

Excellent and thought provoking.
There is probably some urgency in the med ed field given the speed of AI development. Medical education is not nimble. I wonder if you have some ideas about how to induce the rapid change that is required.

Here I discuss patient safety issues associated with acute care AI. Patient safety is closely linked with the need for new focus of medical education.




Omar Islam, Radiology

Thu, 02/14/2019 - 14:56

Thank you Dean Reznick for the forward-sounding article. The School of Computing and Department of Radiology touched on a few of these points in our recent 2-part “AI in Radiology” lecture series.

The CAR (Canadian Association of Radiologists) has recently published a white paper on AI. It mentions the importance of AI education and the important role of universities, such as Queen’s, in spearheading the changes in healthcare that AI has and will bring. A quote from the paper: “Residency programs should integrate health informatics, computer science, and statistics courses in AI in their curriculum.”



Dr. Garry L Willard Meds'63

Sat, 02/16/2019 - 09:58

Fascinating commentary and view into the future.
But, overall, I think our profession needs less artificial intelligence and more common sense.


Paul Rutherford

Sat, 02/16/2019 - 11:22

Intuition – define and screen for that ability. It is the deficiency in AI, and probably always will be.


Aviv Shachak

Sat, 02/23/2019 - 13:09

One other point I would consider is that implementing new technologies often brings non-technical challenges, or has unintended consequences that are not always positive. AI is no exception. There are challenges such as over-dependence on the technology, changes to the patient-clinician relationship (will the changes be all positive, as you imply here? I doubt it. There will probably be a mix of positive and negative impacts), bias in the algorithms, new types of errors (even if the end result is better than what we have now, it will not be error-free), and obviously, changes in workflow that may improve processes or make them worse. Education of future health care professionals must also highlight the fact that it's not just about mastering new technologies. It's a complex sociotechnical system, and understanding this complexity and awareness of how parts of the system are interrelated and affect each other will be key (understanding of complexity is also a characteristic I'd look for in future students).

