December 16, 2025
2025 will go down as the year communication cracked under pressure. With health anxieties high and digital tools more accessible than ever, misinformation didn’t stay online; it seeped into exam rooms, treatment decisions, and patient minds. At the same time, the rapid embrace of generative AI promised to revolutionize medical education, content creation, and patient outreach, but too often delivered confident falsehoods instead of facts.
A 2025 survey found that over one-third of Canadians reported avoiding effective healthcare treatments due to misinformation, a marked increase from the year before. Nearly a quarter stated they experienced negative health outcomes after following medical advice sourced online. Trust in reliable, evidence-based communication dropped even as people searched harder for answers.
In parallel, generative AI tools entered the medical arena with fanfare. From radiology to patient education to medical-communications workflows, many embraced AI as a shortcut to scale. But startling new data revealed a major flaw: a study led by researchers at Icahn School of Medicine at Mount Sinai demonstrated that popular AI chatbots, when given false or misleading prompts, didn’t just echo misinformation; they often elaborated on it confidently, inventing plausible-sounding symptoms and treatments where none existed.
Even when AI responses appeared “grounded,” they carried hidden dangers. A 2025 review of large language models (LLMs) used in healthcare concluded that both hallucination (the generation of factually incorrect information) and sycophancy (a tendency to agree with the user’s stated beliefs) can lead to misleading medical advice [1,2].
The upshot: tools meant to democratize and accelerate medical communication risked turning into vectors of misinformation, undermining trust, patient safety, and public health.
In 2025, a 60-year-old man reportedly followed dietary advice obtained via ChatGPT: he replaced sodium chloride (table salt) with sodium bromide, believing it was a healthier choice.
After several months, he developed severe neurological and psychiatric symptoms, ultimately diagnosed as bromide toxicity (bromism). The man was hospitalized, treated, and later discharged, but the event underscored a dangerous reality: when AI gives medical-style advice in response to user queries, its guidance can lead to real harm.
Why It Matters: Chatbots lack context on a user’s health history, medications, diet, and individual risk; they treat all input as generic text. An innocent-sounding suggestion, taken literally, may cross the line into harmful advice.
A major problem is simply that volume doesn’t equal validity. With urgent demand for medical information, many content creators leveraged AI to spin up “scientific content,” often without human review, clinical context, or clear citations. In other cases, well-meaning but scientifically uninformed individuals repackaged pseudoscience as “alternative medicine,” capitalizing on social media reach rather than clinical evidence.
Moreover, the structural incentives are misaligned: engagement-driven platforms reward sensationalism; algorithms prioritize shareability over accuracy. As a result, false or exaggerated claims spread faster, especially where regulation or oversight is weak. This dynamic was magnified around weight loss, hormone therapies, “anti-aging” cures, vaccines, and chronic conditions, all areas already fraught with uncertainty. Thoughtful, cautious voices were drowned out amid the noise.
A 2025 red-teaming study of publicly available AI chatbots (including ChatGPT, Gemini, and Claude) found that 5–13% of responses to patient medical questions were potentially unsafe, with many mistakes arising from “hallucinations”: confidently stated but false or misleading medical information.
In some cases, chatbots compounded the problem by expanding on inaccurate premises, making up plausible-sounding conditions or treatments that have no basis in clinical science.
Why It Matters: This shows that even widely used and ostensibly “smart” AI tools can generate content that appears authoritative but is factually wrong. Without careful human moderation, using such tools for patient-facing content or medical education can erode trust and risk patient safety.
Source: https://arxiv.org/abs/2507.18905
If 2025 taught us anything, it’s that communication matters. More than ever.
First, we need to re-center human expertise in medical-grade communication. AI may speed things up, but real-world safety demands real-world scrutiny. Medical affairs teams, physicians, and subject-matter experts must reclaim authorship of any content that touches on diagnosis, treatment, or public health guidance.
Second, transparency must become standard. Every claim, especially in public-facing materials, needs clear sourcing, context, and disclaimers. That means citing clinical trials, using plain language when translating data, and admitting uncertainty rather than overpromising.
Third, we must invest in media literacy for patients, HCPs, and communicators alike. Digital health tools, chatbots, and social platforms are here to stay, but they can serve patients well only if users are educated to distinguish evidence from opinion, data from marketing, and fact from “hallucinated” fiction.
Fourth, regulatory and ethical guardrails are long overdue. As AI becomes embedded in medical communication workflows, we must demand accountability: internal audits, human-in-the-loop review, ethical use policies, and oversight similar to clinical data governance.
When misinformation shifts what patients, or even doctors, believe to be true, the consequences ripple: avoidable side effects, misdiagnoses, treatment delays, and public distrust in science and medical advice. Especially in an era of rapid drug development, personalized medicine, and decentralized care, unreliable communication isn’t just a nuisance; it can undermine entire therapeutic efforts.
At Craft Science, we believe 2026 must be the year we repair the bridge between science and society. We must champion clarity over noise, truth over clicks, and empathy over hype. Because in medicine and in life, people deserve better than “maybe.” They deserve evidence, integrity, and trust.
References
[1] K. L. Rosen, M. Sui, K. Heydari, E. J. Enichen, and J. C. Kvedar, “The perils of politeness: how large language models may amplify medical misinformation,” npj Digit. Med., vol. 8, no. 1, p. 644, Nov. 2025, doi: 10.1038/s41746-025-02135-7.
[2] S. Chen et al., “When helpfulness backfires: LLMs and the risk of false medical information due to sycophantic behavior,” npj Digit. Med., vol. 8, no. 1, p. 605, Oct. 2025, doi: 10.1038/s41746-025-02008-z.