The use of artificial intelligence in healthcare could create a legally complex blame game when it comes to establishing liability for medical failings, experts have warned.
The Guardian reports that the development of AI for clinical use has boomed, with researchers creating a host of tools, from algorithms to help interpret scans to systems that can aid with diagnoses. AI is also being developed to help manage hospitals, from optimising bed capacity to tackling supply chains.
But while experts say the technology could bring myriad benefits for healthcare, they also see cause for concern, from a lack of testing of the effectiveness of AI tools to questions over who is responsible should a patient have a negative outcome.
Professor Derek Angus, of the University of Pittsburgh, said: “There are definitely going to be instances where there’s the perception that something went wrong and people will look around to blame someone.”
The summit on Artificial Intelligence, hosted last year by the Journal of the American Medical Association, brought together a panoply of experts including clinicians, technology companies, regulatory bodies, insurers, ethicists, lawyers and economists.
The resulting report, of which Angus is first author, not only looks at the nature of AI tools and the areas of healthcare where they are being used, but also examines the challenges they present, including legal concerns.
Professor Glenn Cohen of Harvard Law School, a co-author of the report, said patients could face difficulties showing fault in the use or design of an AI product. There could be barriers to gaining information about its inner workings, while it could also be challenging to propose a reasonable alternative design for the product or prove a poor outcome was caused by the AI system.
“The interplay between the parties may also present challenges for bringing a lawsuit – they may point to one another as the party at fault, and they may have an existing agreement contractually reallocating liability or have indemnification lawsuits,” he said.

Professor Michelle Mello, another author of the report, from Stanford Law School, said courts were well equipped to resolve legal issues.
“The problem is that it takes time and will involve inconsistencies in the early days, and this uncertainty elevates costs for everyone in the AI innovation and adoption ecosystem,” she said.
The report also raises concerns about how AI tools are evaluated, noting many are outside the oversight of regulators such as the US Food and Drug Administration (FDA).
Angus said: “For clinicians, effectiveness usually means improved health outcomes, but there’s no guarantee that the regulatory authority will require proof (of that). Then once it’s out, AI tools can be deployed in so many unpredictable ways in different clinical settings, with different kinds of patients, by users who are of different levels of skills. There is very little guarantee that what seems to be a good idea in the pre-approval package is actually what you get in practice.”
The report notes that there are currently many barriers to evaluating AI tools, including that they often need to be in clinical use before they can be fully assessed, while current approaches to assessment are expensive and cumbersome.
Angus said it was important that funding be made available so the performance of AI tools in healthcare could be properly assessed, with investment in digital infrastructure a key area. “One of the things raised during the summit was [that] the tools that are best evaluated have been least adopted. The tools that are most adopted have been least evaluated.”
Study details
AI, Health, and Healthcare Today and Tomorrow: The JAMA Summit Report on Artificial Intelligence
Derek Angus, Rohan Khera, Tracy Lieu et al.
Published in JAMA Network on 13 October 2025
Abstract
Importance
Artificial intelligence (AI) is changing health and healthcare on an unprecedented scale. Though the potential benefits are massive, so are the risks. The JAMA Summit on AI discussed how health and healthcare AI should be developed, evaluated, regulated, disseminated, and monitored.
Observations
Health and healthcare AI is wide-ranging, including clinical tools (eg, sepsis alerts or diabetic retinopathy screening software), technologies used by individuals with health concerns (eg, mobile health apps), tools used by healthcare systems to improve business operations (eg, revenue cycle management or scheduling), and hybrid tools supporting both business operations (eg, documentation and billing) and clinical activities (eg, suggesting diagnoses or treatment plans). Many AI tools are already widely adopted, especially for medical imaging, mobile health, healthcare business operations, and hybrid functions like scribing outpatient visits. All these tools can have important health effects (good or bad), but these effects are often not quantified because evaluations are extremely challenging or not required, in part because many are outside the US Food and Drug Administration’s regulatory oversight. A major challenge in evaluation is that a tool’s effects are highly dependent on the human-computer interface, user training, and setting in which the tool is used. Numerous efforts lay out standards for the responsible use of AI, but most focus on monitoring for safety (eg, detection of model hallucinations) or institutional compliance with various process measures, and do not address effectiveness (ie, demonstration of improved outcomes).

Ensuring AI is deployed equitably and in a manner that improves health outcomes or, if improving efficiency of healthcare delivery, does so safely, requires progress in 4 areas. First, multistakeholder engagement throughout the total product life cycle is needed. This effort would include greater partnership of end users with developers in initial tool creation and greater partnership of developers, regulators, and healthcare systems in the evaluation of tools as they are deployed. Second, measurement tools for evaluation and monitoring should be developed and disseminated. Beyond proposed monitoring and certification initiatives, this will require new methods and expertise to allow healthcare systems to conduct or participate in rapid, efficient, and robust evaluations of effectiveness. The third priority is creation of a nationally representative data infrastructure and learning environment to support the generation of generalisable knowledge about health effects of AI tools across different settings. Fourth, an incentive structure should be promoted, using market forces and policy levers, to drive these changes.
Conclusions and Relevance
AI will disrupt every part of health and healthcare delivery in the coming years. Given the many long-standing problems in healthcare, this disruption represents an incredible opportunity. However, the odds that this disruption will improve health for all will depend heavily on the creation of an ecosystem capable of rapid, efficient, robust, and generalisable generation of knowledge about the consequences of these tools on health.
See more from MedicalBrief archives:
Making AI usable, useful and safe – for clinicians and patients
The challenges of rapidly evolving AI in healthcare
AI algorithms in diagnosis could harm patients – Dutch study