January 4, 2026 - 8:00am

Artificial intelligence has been hailed as the answer to numerous problems in healthcare, ranging from early diagnoses to administrative shortages. Yet it is also increasingly seen as a cause of harm, adding to the existing pile of bad information about disease and treatments. The Guardian has now reported that Google’s AI Overviews, which are increasingly appearing at the top of Google searches, are inaccurate and misleading on subjects including liver tests and pancreatic cancer. Google, meanwhile, says it has “a knowledge base of billions of facts” and maintains that “the vast majority provide accurate information”.

Many doctors would be delighted for AI to transform their working lives. It would be ideal if, for example, there were checks on whether the people who were meant to have blood tests actually did, with the results consulted and plans for a follow-up put in place. It should be possible to tailor written information about conditions to individual people’s preferences, using varying detail and visual help such as diagrams depending on what people want. GPs receive a huge amount of paperwork every day; filtering out what is urgent and what is not could save hours of time. I am, emphatically, not “against” AI: I am simply against poorly tested AI which makes claims far beyond proof.

The use of AI in healthcare increasingly resembles an inadequately regulated mess which risks misleading patients while making us all stupid. Who could forget Babylon Healthcare, which received millions in funding for an app which used AI to act as a triage system and was supposedly a “GP in your pocket”? In 2018, then-Health Secretary Matt Hancock proudly stated that it “works brilliantly for me”. Despite the scheme’s obvious failings, it was difficult to convince any regulator to investigate unproven claims about safety measures. Babylon eventually went bust in 2023.

There is clearly a culture clash between medicine and AI. In the former, practitioners are taught to test hypotheses before making claims of effectiveness. This is the only thing that stands between doctors and quackery, medicine having a long history of avoidable harm to patients. In the muscular world of technology, meanwhile, innovation and branding tend to attract attention, investment and kudos; AI moguls prioritise moving fast and breaking things, while sceptics who demand further evidence are dismissed as ignorant Luddites.

Yet patients and their families harmed by bad information often go uncounted in balance sheets. Unless you formally test the information people are actually given, you risk harms that, because no one is looking for them, are never learned from. What's more, the use of bad AI can stop us thinking for ourselves. I have been horrified by how normalised the use of AI has become in university assessments, letters to medical journals, CVs and job applications. Faulty syntax is preferable to recycled, template-produced ideas that dull the fantastic potency of human thought.

There is no shortage of good health information online, if you know where to look. When AI-powered search engines produce “the facts”, it’s tempting to believe them. Humans crave authoritative answers, and authority requires trust; what we have at present doesn’t deserve it. Critical-thinking skills are vital for any internet user. It’s good to question what we are told, especially when it comes from “black boxes” whose working-out we cannot see. AI has the potential to provide huge benefits to those seeking medical assistance, but we need to get our priorities right and test hypotheses first. Bad information can be just as harmful as the side effects of drugs.


Margaret McCartney is a GP and broadcaster.
