A new UK study says one in seven people are already using chatbots for health advice instead of contacting a GP or another NHS service. One in ten say they have used AI for mental-health therapy or wellbeing support instead of seeing a trained professional. Of those making the switch to unregulated digital medical advice, 25% cite long NHS wait times as part of the reason, 20% said the AI did not encourage them to seek professional help, and 21% said they decided against professional advice because of what a chatbot told them.

The new study by King’s Health Partners reveals a worrying change in behaviour rather than a passing curiosity. According to the results, people have moved beyond simply asking ChatGPT whether a headache might be down to stress; a growing share of the public is letting unregulated consumer AI models shape decisions about whether or not to see a GP. Professor Graham Lord, the lead author, said the country is seeing “an unregulated AI healthcare system alongside the NHS.”
Along with Responsible AI UK and the Policy Institute at King’s College London, King’s Health Partners surveyed more than 2,000 adults and found that convenience was the biggest reason people turned to chatbots for medical advice, cited by 46% of adults, followed by “curiosity” at 45%, uncertainty about whether the issue was serious enough for a GP at 39%, and NHS waiting times at 25%. The combination of frustration, convenience, and doubt is a revealing one: people are not turning to AI because they trust it more than professionals. Instead, they are using chatbots simply because they are easier, faster, and more available.
Consumer AI is not a regulated triage service. It does not examine individual patients, order tests, observe subtle signs, or take responsibility for any outcomes. The Royal College of General Practitioners’ president, Prof Victoria Tzortziou Brown, called the finding “highly concerning” and warned that artificial intelligence cannot fully understand someone’s history or make safe clinical judgements. She also warned that the information given by these models can be inaccurate, misleading, or stripped of crucial context.
Mayo Clinic’s reporting echoes this sentiment, noting that while AI can answer health questions in seconds, diagnosis and treatment are too complex for general-purpose machines. Mayo warns that these tools do not have access to full patient records, cannot examine the individual, cannot reason like a GP or medical professional, and can hallucinate plausible-sounding but false advice. As the piece itself puts it:
“AI chatbots give answers based on patterns in data. They don’t ‘know’ facts in the way a health professional does. Sometimes AI information sounds true but is completely incorrect. This is known as hallucination. For example, when asked how to get more minerals from food, AI has been known to recommend eating rocks.”
The problem is not only accuracy, but also trust. Another paper, titled “People over-trust AI-generated medical responses and view them to be as valid as doctors, despite low accuracy”, finds exactly that. It noted that even when responses were inaccurate, people were as likely, or even more likely, to act on them as they would on a GP’s or medical professional’s advice. It’s a deeply concerning mix: systems that are often wrong, users who can’t tell when they’re wrong, and a healthcare environment so strained that people are willing to replace proper treatment with simple convenience.
However, not everyone is on board with the shift to digital triage. The study also revealed that large parts of the public are already sensing the danger. Support and opposition to AI in clinical decision-making were almost evenly split, with 37% in favour and 38% opposed, but the top emotion the public associated with AI performing clinical tasks in the NHS was anxiety, reported by 39% of people. Overall, people were twice as likely to choose a negative emotion as a positive one, and three-quarters (75%) said automated tools used in patient care should be officially approved and regulated, even if that slowed down adoption.
The demand for regulation and additional care is clear, but it runs head-first into today’s reality. King’s College London says there is no single UK regulatory framework for AI in healthcare, and notes that critics including the Nuffield Trust and the Royal College of Physicians have described the situation as a “wild west” of AI adoption. The public, for now, understands the gap quite clearly and wants proper oversight and the ability to opt out. The institutions behind the AI roll-out, however, are far behind that expectation.
People, as a result, are facing the emergence of a parallel system. On one side, the NHS is slow, rationed and overburdened, but regulated. On the other is an instant, ever-present, but unregulated shadow service. The study is a snapshot of a shift already underway, not a forecast.
There’s also a class dimension that’s being missed. The National Centre for Social Research published a study in March 2025 that explored the public perceptions of artificial intelligence, including experiences of using it, views on its application across sectors, and opinions on how AI should be regulated or governed.
It revealed that people from lower socioeconomic backgrounds are less likely to trust artificial intelligence systems, and more likely to think they reinforce existing inequalities. Yet these are the same groups most likely to feel the effects of NHS access problems. If long waits and overstretched services nudge people towards unreliable digital shortcuts, then AI is not smoothing inequality, but rather layering a new form of risk on top of existing weakness.
Some may argue that this is simply the modern version of Googling symptoms, which is something the public has always done before seeing a GP. However, Googling at least delivered people to websites and forums that forced them to sift through sources written by real human beings. Chatbots collapse that process into a single answer written in the tone of a calm, informed authority. The user now receives what looks like a professional verdict instead of weighing information from multiple sources themselves. That is what makes the experience more dangerous: the machine acts like it knows, and most people are not well placed to tell when it doesn’t.
One in seven getting advice from AI instead of seeing a GP is not a statistic about innovation. What we’re seeing unfold is institutional drift, with patients improvising around a strained system using tools that remain unregulated, unevenly understood, and over-trusted. The technology has not suddenly become more medically reliable, but its confident, immediate, personalised responses encourage people to trust it anyway.