AI in healthcare creates liability complexity, experts warn

TL;DR: Medical AI tools are creating legal complexity around establishing liability when treatments fail. Patients face barriers to proving fault, whilst multiple parties—from developers to clinicians—may point to each other as responsible. Concerns also exist about inadequate testing before deployment.

The development of AI for clinical use has created a legally complex environment for establishing responsibility when medical failings occur, according to experts contributing to a major report on artificial intelligence in healthcare.

Context and Background

The Journal of the American Medical Association hosted a summit bringing together clinicians, technology companies, regulatory bodies, insurers, ethicists, lawyers and economists to examine AI’s role in healthcare. The resulting report highlights that patients could face significant difficulties showing fault where an AI system is involved in their care.

Prof Glenn Cohen from Harvard Law School notes that barriers to gaining information about an AI system’s inner workings, combined with the challenge of proposing reasonable alternative designs, make it difficult to prove a poor outcome was caused by the AI. The interplay between multiple parties—developers, healthcare providers, and technology vendors—creates additional complexity, as each may point to others as being at fault or have existing contractual agreements reallocating liability.

The report also raises concerns about the evaluation of AI tools, noting that many fall outside regulatory oversight. Tools deemed effective in pre-approval packages may perform differently in varied clinical settings, with different patients and users of varying skill levels.

Looking Forward

Prof Derek Angus from the University of Pittsburgh observes that instances where something appears to go wrong will inevitably lead people to look for someone to blame. Prof Michelle Mello from Stanford Law School suggests that whilst courts are equipped to resolve these issues, early inconsistencies will elevate costs across the AI innovation and adoption ecosystem.

The report emphasises the need for funding to properly assess AI tool performance in healthcare settings, with investment in digital infrastructure identified as crucial. Angus notes a concerning pattern: “The tools that are best evaluated have been least adopted. The tools that are most adopted have been least evaluated.”
