Brits are quietly ditching their GP for ChatGPT – and 1 in 5 skipped a real doctor because of what it told them
Key Points
- 15% of Britons have used AI chatbots for health advice instead of contacting a GP or NHS service, and 10% have used AI for mental health support instead of a trained professional
- 21% of those who consulted AI decided against seeing a real doctor based on what the chatbot told them, and 20% say the AI did not encourage them to seek professional advice
- AI chatbots have been shown to misdiagnose up to 80% of early medical cases, raising serious safety questions about this shift
- The public guess that 39% of GPs use AI in clinical decisions; the real figure is 8%, a roughly fivefold overestimate
- 76% want AI tools in the NHS officially approved and regulated even if it slows adoption, against just 17% who disagree
One in seven Britons have swapped their GP for an AI chatbot, and 21% of those who asked an AI about a worrying symptom went on to skip a real doctor’s appointment because of the answer, according to a major new study from King’s Health Partners, Responsible AI UK and the Policy Institute at King’s College London.
A fifth of those who used an AI for a health worry decided, on the strength of what the chatbot told them, not to see anyone qualified about it.
Another 20% say the AI never even nudged them towards a professional. Meanwhile, recent research suggests AI chatbots get early medical cases wrong up to 80% of the time.
The public think AI is everywhere in the NHS. It is not.
Ask the average Brit how many GPs use AI in clinical decisions, and they will guess 39%. The actual figure is 8%. That gap, of around five times reality, says a lot about how this conversation has been shaped.
People are not anxious about AI in the NHS because it is already running the show. They are anxious because they think it is.
Half of 18-to-24-year-olds (49%) oppose AI being used in NHS clinical decisions, compared with 36% of over-65s.
The age group most fluent with these tools is also the one most uneasy about a hospital using them.
One in four young Britons (25%) say AI has been bad for their own mental health, and 19% say the same about their physical health, the highest of any age group.
The gender gap is consistent: 46% of women oppose AI in clinical decisions, against 30% of men; 65% of women are uncomfortable with the idea of a GP using an AI chatbot in their appointment, against 45% of men; and 46% of women feel anxious about AI in clinical settings, against 31% of men.
Across every consent question in the survey, women are 7 to 11 percentage points more likely than men to say patients should be told in advance and given the option to opt out.
One in ten have quietly replaced a therapist with a chatbot.
The mental health number is the one that stops you. 10% of the public say they have used AI for therapy or wellbeing support instead of seeing a trained professional.
The most common reasons people give for going to AI over a GP are convenience (46%), plain curiosity (45%), uncertainty about whether their concern was serious enough to bother anyone (39%) and being made to wait too long for the NHS (25%).
Among those who have used AI for health advice, 59% say it has been good for their physical health and 53% say the same about their mental health.
The view of what AI is doing to the rest of the country is darker: 42% think AI chatbots are bad for the public’s mental health, only 31% think they are good.
Trust me, I’m tired.
46% of the public say they trust a doctor much more than AI for psychological therapy; just 1% say the same of AI. But when the survey added one detail, that the doctor was at the end of a long shift, trust in the doctor for psychological therapy fell from 46% to 25%.
For skin cancer detection from photos, it dropped from 30% to 16%. The public’s preference for a human is conditional, and the condition is whether the human is at their best.
If an NHS-approved AI ever disagreed with a doctor’s diagnosis, 55% would want a second doctor to weigh in before any decision was made. Only 7% would follow the AI on the basis that it might be more accurate.
Britain wants the rulebook before the rollout.
76% of the public want AI tools used in patient care officially approved and regulated, even if that slows adoption down. Only 17% would let doctors pick whatever AI tools they fancy.
The country also wants a say: between 58% and 63% want to be told in advance and given the option to opt out across four common scenarios. In most of those scenarios, the NHS is not currently required to offer an opt-out.
The Nuffield Trust and the Royal College of Physicians have described the current AI in healthcare landscape as a “wild west.” A National Commission into the Regulation of AI in Healthcare is now examining how oversight should work.
Graham Lord, Executive Director of King’s Health Partners, said the findings underline the scale and pace at which AI is already shaping how people access care, and warned that responsibility currently lands on clinicians “even where they have limited control over how AI tools are introduced.”
Amy Clark, Senior Policy Fellow at the Policy Institute, said the scepticism among women and young people “challenges the assumption that familiarity with new technology creates acceptance.”
Sarvapali Ramchurn, Chief Executive Officer of Responsible AI UK, an outfit that connects researchers with policy makers and clinicians on AI safety, said the trend in the data is that the public is “trusting AI even when they should not.”
A seventh of Britain is already using AI as a first line of health advice. A fifth of that group is being talked out of seeing a real doctor by something that gets early diagnoses wrong up to four times out of five.
The country is not asking for AI to be stopped. It is asking for someone to be holding the wheel. On these numbers, nobody is.