
Making AI usable, useful and safe – for clinicians and patients

Around the world, healthcare professionals and governments are recognising the potential of AI to enhance access to care, improve quality and increase efficiency.

In South Africa, where public health systems grapple with workforce shortages, limited resources, and a high burden of disease, AI offers significant opportunities to help address these challenges, writes Dr Volker Hitzeroth, medico-legal consultant at Medical Protection Society.

He writes:

Of course, where there is great opportunity, there is often also risk, and the integration of AI is no exception.

The Health Professions Council of South Africa (HPCSA) acknowledges both the promise and the pitfalls of AI in healthcare in its Ethical Guidelines on the Use of Artificial Intelligence (Booklet 20), published earlier this year, in which it says there is “enormous potential for AI to improve accessibility, enhance quality and reduce administrative burden”. But it also warns of the “significant ethical, legal and professional challenges” associated with AI, including the threat of over-reliance on technology.

It adds that AI carries risks such as discrimination arising from biased historical data, automation bias, and the potential erosion of clinical skills among health practitioners.

The risks and opportunities of AI tools are further explored in a White Paper published earlier this year – a collaboration between the MPS Foundation, the Centre for Assuring Autonomy at the University of York and the Improvement Academy hosted at the Bradford Institute for Health Research in the UK.

The paper calls on governments, AI developers and regulators around the world to ensure AI tools are integrated into healthcare delivery in a way that is usable, useful and safe for both patients and practitioners. It argues that the greatest threat to AI uptake in healthcare is the “off switch”: frontline practitioners will simply stop using the technology if they see it as burdensome or unfit for purpose, or are wary of how it may affect their decision-making, their patients and their licences.

Put simply, if AI works well for clinicians, they are more likely to embrace and interact with it, which will play a significant role in unlocking the potential benefits to patients.

The White Paper builds on results from the Shared Care AI Role Evaluation (CAIRE) research project. The Medical Protection Society established the MPS Foundation to support cross-disciplinary research of exactly this kind, and the CAIRE project was funded as part of the Foundation's first annual grant programme.

The CAIRE project team – which brought together researchers with expertise in medicine, AI, human-computer interaction, law, ethics and safety science – evaluated different ways in which AI technology could be used by clinicians. These ranged from tools that simply provide information, through those that liaise directly with patients outside the consultation room, to those that offer recommendations to clinicians.

This elicited a rich set of findings – outlined in the White Paper – highlighting the need for a thoughtful approach to integrating AI decision-support tools into real-world clinical settings, ensuring they genuinely support the practitioners using them while preserving the important human touch in patient care.

While the paper was developed by a UK-based research team, many of its findings will have global applicability. As AI technologies, and their use in our healthcare system, continue to evolve rapidly, it is important to reflect on these findings.

First, the paper recommends that for AI tools to work for users they need to be designed with users. In healthcare contexts, which are safety-critical and fast-paced, engaging clinicians in the design of all aspects of an AI tool – from the interface to the details of its implementation – can help to ensure these technologies deliver more benefits than burdens.

A participatory approach involving different domain experts, practitioners, and patients could ultimately help to ensure AI decision-support tools are usable, useful, and safe.

Involving practitioners in the design and development of these tools can also help in discovering the “sweet spot” between the provision of too much and too little information. Too much information, and the time it takes to review it detracts from building a rapport with a patient. Too little, and practitioners may not trust the technology.

Achieving the right balance requires more than just designing the interface and defining the tool’s functionality. AI companies and developers need to understand the broader context in which the AI tool will be used, including its impact on workflows and patient-clinician relationships.

As such, engaging and collaborating with practitioners during design and development is crucial.

Second, AI tools should not be regarded as akin to senior colleagues in clinician-machine teams. Practitioners should not always be expected to agree with or defer to an AI output, whether that output is a direct recommendation, a classification, or an analysis of the data. New healthcare AI policy guidance, and guidance from healthcare organisations, should make explicit how clinicians are expected to approach conflicts of opinion with an AI tool.

It should also be made clear that, in cases of disagreement, a clinician should not be expected to defer to an AI output.

Practitioners should regard AI as an adjunct, a tool. They should not think of it as a replacement for – or improvement on – either their own clinical judgment or the judgment of a trusted human colleague.

Practitioners should feel empowered to disagree with AI recommendations, particularly when the recommendation is suboptimal and does not align with their own clinical judgment.

The White Paper condenses its recommendations into some practical advice for practitioners:

1. Practitioners should ask for training on the AI tools they are expected to use, to help them to navigate the tool more skilfully and to know when confidence in an AI’s outputs would be justified, supporting their autonomy. This training should cover the AI tool’s scope, limitations and decision thresholds, as well as how the model was trained and how it reaches its outputs.

2. Practitioners should only use AI tools within their areas of existing expertise, not beyond them. Where a practitioner’s knowledge of a specific case is limited, they should seek the advice of a human colleague who understands the area well and can oversee the AI tool, rather than rely on the tool to fill the knowledge gap.

3. Practitioners should feel confident to reject an AI output they believe to be wrong, or even suboptimal for the patient. They should resist any temptation to defer to an AI’s output to avoid or reduce the likelihood of being held responsible for negative outcomes.

4. Practitioners should regard the input from an AI tool as one part of a wider, holistic picture concerning the patient, rather than the most important input into the decision-making process. They should be aware that AI tools can be fallible, and those which perform well for an ‘average’ patient may not perform well for the individual in front of them.

5. Practitioners should feel empowered to trust their instincts and judgment about appropriate disclosure of the use of an AI tool, as part of a holistic, shared decision-making process with individual patients. However, they should also recognise that in some critical situations patients should be informed that an AI tool has been used.

6. Clinicians should engage with healthcare AI developers, when asked and where possible, to ensure the tools are user-focused and fit for purpose for their intended contexts.

Looking to the future, generating greater confidence in AI among healthcare practitioners is vital if the potential benefits are to be unlocked for patients. The White Paper addresses this issue boldly, and is a timely contribution to the wider AI debate.

To learn more about how AI is transforming medical practice, MPS is hosting a webinar on 7 October. The ‘AI in Clinical Medicine: Progress and Pitfalls’ webinar will examine the clinical, practical, and medico-legal implications of AI, helping healthcare professionals understand its benefits and risks. A new framework – designed to help integrate AI safely and responsibly – will also be introduced.

Visit here for more information and to register.

 

See more from MedicalBrief archives:

AI threat of ‘de-skilling’ for medical trainees

The challenges of rapidly evolving AI in healthcare

AI changing radiology, but not replacing human input
