Having recently launched the AI Safer Practice Framework at Medical Protection and Dental Protection – a model designed by Dr Raj Rattan, dental director, to help healthcare professionals integrate AI safely and responsibly into clinical practice – he now shares further reflections on the evolving role of AI in healthcare.
He writes:
Advancements in healthcare have been driven by innovations in biomaterials, medical technology and pharmacology. From laparoscopic surgery and diagnostic imaging to electronic health records and targeted drug therapies, these developments have transformed clinical practice, both in what we do and how we do it.
AI marks a new frontier: a transformative and rapidly evolving technology capable of interpreting data, analysing images, predicting outcomes and sometimes recommending interventions. The future we imagine is already pressing at our door, such is the pace of development.
AI oversight
At the centre of today’s debate on AI in healthcare is the question of oversight. Most current frameworks, including Medical Protection’s AI Safer Practice Framework, emphasise human-in-the-loop (HITL) systems – AI outputs that are supervised, validated and ultimately signed off by a human clinician.
The logic is straightforward: humans bring professional judgment, context and accountability. In this way, the healthcare professional is seen as the strong link in the chain, ensuring that patient safety is not compromised by technological or algorithmic limitations.
This assumption deserves closer examination. Why do we believe that humans always get it right? Humans, too, are prone to error – subject to fatigue, bias, and cognitive overload – and their involvement does not guarantee infallibility.
Diagnostic error remains one of the leading causes of harm in healthcare worldwide. Cognitive biases such as confirmation bias, anchoring and the availability heuristic distort decision-making, even in experienced hands. Stress, fatigue, workload and commercial pressures also take their toll. The truth is that the “human factor” is already a weak spot in clinical safety.
By positioning HITL as the ultimate safeguard against AI error, we risk over-estimating human reliability while under-estimating human vulnerability.
The VIE model
What role do healthcare professionals play in guiding and governing emerging technologies? One way to frame this responsibility is through the VIE model, which outlines a continuum of oversight as AI systems evolve. It begins with Verification, where clinicians ensure data integrity and system reliability.
It progresses to Interpretation, where clinical expertise adds meaning and context to algorithmic outputs.
Finally, it extends into Enablement, as AI systems become more autonomous. Each phase builds upon the last, reflecting the evolving ethical standards in today’s digital healthcare landscape.
The VIE model is, essentially, a shift from verification in the present to enablement in the future.
Paradox
As AI continues to improve, the gap between human and machine performance may widen. The clinician may increasingly become the weak link: slower (although that may sometimes be an asset), more inconsistent and more error-prone than the technology. At that inflection point, the original logic of HITL starts to reverse. Instead of being the strong link that corrects machine error, the human risks becoming the weak link that introduces error into an otherwise reliable system. It is an unsettling paradox.
The current position
I must stress that I am not suggesting our trust in, and reliance on, human oversight is misplaced. Quite the opposite.
At present, AI systems remain fragile: they lack transparency and are vulnerable to bias in their training data. AI cannot fully understand the human dimensions of care – patient values, preferences and context. The clinician is still essential, not just for validating outputs but for discussing uncertainty of outcomes and obtaining consent, for example. These responsibilities cannot be delegated to algorithms.
Looking ahead
We must also look ahead. Large language models (LLMs) and other generative AI systems are advancing rapidly. Their ability to synthesise information, adapt to context, and mimic reasoning suggests that autonomous AI in healthcare may be on the horizon.
When that day arrives, the key question will not be whether humans should stay in the loop, but whether the loop itself should be redesigned.
We cannot cling to the belief that human oversight will always be the gold standard of safety. Instead, we must be realistic about both the strengths and limitations of human judgment, and we must develop governance frameworks that can evolve as the balance of responsibility between humans and machines changes.
Facing the future
We must approach the future with neither blind optimism nor paralysing fear, but with reason and clarity. LLMs may also lay the groundwork for a different future, one in which autonomy in AI is an operational reality.
As I finish editing this article, I take a short break and check my emails. There is an email from an AI developer with whom I have spoken before. It is marked confidential, the subject line reads ‘autonomous AI’, and attached is a non-disclosure agreement.
I am reminded of Einstein’s words: ‘I never think of the future. It comes soon enough.’
It just did.
The AI Safer Practice Framework: Find out more
See more from MedicalBrief archives:
New MPS framework supports safer AI use in healthcare
Potential of medical liability pitfalls with increasing AI use
Making AI usable, useful and safe – for clinicians and patients
