TL;DR

Nearly three in ten UK GPs are now using AI tools, including ChatGPT, in patient consultations, despite concerns about professional liability and clinical errors. Researchers describe a “wild west” of regulation that leaves doctors unsure which tools are safe to use.

Rapid Adoption Outpacing Oversight

The Nuffield Trust study, based on a Royal College of GPs survey of 2,108 family doctors, found that 28% are already using AI. More male GPs (33%) than female GPs (25%) have adopted the technology, and usage is significantly higher in affluent areas than in poorer ones.

Dr Becks Fisher, the thinktank’s director of research and policy, warns: “There is a huge chasm between policy ambitions and the current disorganised reality of how AI is being rolled out and used in general practice.”

Concerns Over Safety and Liability

Large majorities of GPs worry about “professional liability and medico-legal issues,” “risks of clinical errors,” and “patient privacy and data security.” The regulatory landscape varies dramatically—some NHS regional integrated care boards support AI use whilst others ban it entirely.

Dr Charlotte Blease of Uppsala University notes: “The real risk isn’t that GPs are using AI. It’s that they’re doing it without training or oversight.”

Time Saved—But Not for More Patients

In a blow to ministerial hopes that AI could reduce GP waiting times, the survey found doctors use saved time “primarily for self-care and rest, including reducing overtime working hours to prevent burnout,” rather than seeing additional patients.

Looking Forward

A government commission launched in September will make recommendations on safe, effective and properly regulated AI use in healthcare. Meanwhile, Healthwatch England reports around one in ten patients are turning to AI for health information when they cannot access GP appointments—though advice quality remains “inconsistent.”


Source: The Guardian
