
From Promise to Practice: AI in Medicine, Five Years On (Key Takeaways from the Temerty Medicine Talk)

March 3, 2026

Artificial intelligence is reshaping medicine faster than most health systems can keep up, and few people have a clearer view of that frontier than Dr. Muhammad Mamdani, director of the University of Toronto’s Temerty Centre for AI Research and Education in Medicine (T-CAIREM). On Friday we listened with interest as Dr. Mamdani spoke candidly about where AI in medicine actually stands today, and where it’s taking us, including where we are with AI adoption and literacy in Canada.

The conversation couldn’t have come at a better time. As we dig deep into our 4-part series on AI in healthcare, his insights cut right to the heart of what we’re exploring: not just what AI can do in clinical settings, but what it should do, and how we get there responsibly.

Here are 6 key takeaways that stood out from his talk:

1.  AI Literacy Is Canada’s Most Urgent Gap

Canada ranks a dismal 44th out of 47 countries in AI literacy and adoption. That is a striking vulnerability given that nearly half of Canadians are already turning to AI with their health questions, often receiving false or misleading information. Dr. Mamdani’s message was clear: everything starts with literacy, and T-CAIREM is responding with free AI literacy courses, mirroring what the UK is already doing nationally. Of pressing concern are data privacy (free isn’t always “free” if you’re paying with your data), knowing how to spot hallucinations, and developing prompt literacy skills, which can dramatically reduce the amount of erroneous information AI presents.

2.  Accountability Must Come Before Adoption

Clinicians are still feeling the sting of electronic medical record (EMR) systems that promised to transform care delivery and ended up becoming a “time suck,” largely because no accountability mechanisms were built in to measure whether they were delivering on that promise. Will AI fall short in the same way? Avoiding that outcome requires discipline: clearly identifying the specific problems AI is meant to solve before deployment, rather than introducing it opportunistically and hoping value emerges. And as with any productivity initiative, outcomes must be measured rigorously and proactively. AI scribes, for example, are only worthwhile if you can demonstrate they free up time to see more patients.

3.  Real-World Proof Points Already Exist

Despite the skepticism, some implementations are delivering. ChartWatch at St. Michael’s Hospital, which monitors patients hour by hour and predicts deterioration, has shown a 26% reduction in mortality. This is the kind of precise, problem-specific deployment that separates meaningful AI from hype, and it’s the model Dr. Mamdani wants replicated.

4.  The “Symbiotic” Window Is Closing

Right now, AI and clinicians are still symbiotic: doctors can contextualize and interact in ways AI cannot, so we shunt “menial” tasks to AI to free ourselves up for more complex interactions. But that line is shifting rapidly as AI’s capabilities grow and the definition of “menial” expands, with the potential to eliminate jobs. The jobs impact is a real concern: unions are already pushing back with “no AI” contract clauses, intensifying the tension between lab performance and messy real-world deployment. The question of when symbiosis tips into displacement is no longer theoretical.

5.  Net Benefit Is Likely, But Catastrophes Are Coming

Despite these cautions, Dr. Mamdani’s long-term outlook was cautiously optimistic: the net benefit of AI in medicine will be massive, but countries with poor regulation will see catastrophes and, sadly, serve as object lessons for others. The path forward requires balanced governance: enough oversight to prevent harm, but not so much that innovation is stifled. Human connection and touch will still matter, but AI will fundamentally force us to reexamine what it means to be a caregiver and a patient.

Source: https://digitallibrary.cma.ca/link/digitallibrary1645

6.  Doctors Have to Re-Evaluate the Skills Needed to Remain Relevant

Early in the talk, Dr. Mamdani highlighted a study at the Princess Margaret Cancer Centre in which cancer patients, in a blinded evaluation, rated AI chatbot responses as significantly more empathetic than physician responses. Not by a little, either: they rated the chatbots as more than TWICE as empathetic. While AI is very good at sounding empathetic, it is only doing linguistic pattern-matching; it cannot replicate genuine empathy. But the finding exposes a gap that physicians have largely been allowed to ignore: communication quality has never been rigorously trained or measured in medicine the way clinical knowledge has.

When AI can rapidly diagnose, take symptom histories, and even answer patients’ questions better, which skills will the clinician of the future need most? If physicians don’t want to be outpaced in the one domain traditionally considered irreplaceably human, they need to actively develop and hone the following:

  • Genuine presence and active listening: AI can mirror language, but it cannot offer the felt experience of being truly seen and heard by another person with lived experience of the world.
  • Non-verbal empathy: Touch, eye contact, tone, and body language are dimensions AI cannot yet replicate in clinical settings.
  • Longitudinal relationship-building: The continuity of a trusted physician who knows your history over years remains deeply valuable.
  • Contextual and moral reasoning: Doctors can contextualize in ways AI currently cannot; this includes navigating difficult trade-offs, delivering bad news with compassion and nuance, and making judgment calls that carry human moral weight.
  • Prompt literacy and AI collaboration: Rather than competing with AI, physicians who learn to work with it skillfully (directing it, checking it, and personalizing its outputs) will be far better positioned than those who resist it entirely.

This is a thread we’ve pulled before. In our article on the changing face of CME, we made the case that patient-centered care needs to move from aspiration to accountable practice. The rise of AI makes that argument even more urgent.


The Bottom Line

AI in medicine is not a distant future. It’s an unfolding present, full of both remarkable promise and real peril. The question was never really whether AI would transform healthcare. The question, the one Dr. Mamdani kept returning to, is whether we will be disciplined, literate, and accountable enough to guide that transformation well.

That means investing in education before deployment, measuring outcomes before claiming victories, and developing the irreplaceably human skills that no algorithm can authentically replicate. The clinicians, health systems, and countries that get this right will lead. Those that don’t will become cautionary tales.

It’s still early enough to choose which one we want to be.

(You can watch the full Temerty Medicine lecture here.)