The increasing use of artificial intelligence (AI) should not be seen as a threat to human doctors and nurses but rather as a tool to boost accuracy and efficiency, says Life Healthcare CEO Peter Wharton-Hood. He believes AI will be invaluable in helping healthcare professionals do their jobs faster, more efficiently and, perhaps, at lower cost.
According to Future Healthcare Journal, AI can enable healthcare systems to achieve their “quadruple aim” by “democratising and standardising a future of connected and AI augmented care, precision diagnostics, precision therapeutics and, ultimately, precision medicine”.
Research into the application of AI in healthcare continues to develop rapidly, with potential use cases demonstrated across the sector – for both physical and mental health – including drug discovery, virtual clinical consultations, disease diagnosis, prognosis, medication management and health monitoring.
But, adds Wharton-Hood, “it is not going to replace doctors and a robot cannot replace a nurse. In the context of healthcare, it’s hands-on from nurses (and doctors) that heals patients”.
“To my mind, the knowledge base available in AI will help doctors the same way that computers help pilots, but you are not going to replace the doctor and you’re not going to replace the nurse. They’ve been here since Florence Nightingale’s times and they will be here forever.”
In its 2024 annual report, Life Healthcare noted that the sector was “undergoing significant digital transformation spurred by technological breakthroughs and shifting consumer expectations”, reports BusinessLIVE.
“Innovation continues to accelerate,” it says, with recent strides in AI driving this change. The report highlights the rapid growth of remote monitoring devices, wearables, and enhanced data storage and analysis capabilities, all of which are boosting the efficiency and scope of digital healthcare services.
Risks of bias
Yet despite these advances, there are risks to weigh up. The Mayo Clinic points out that if not properly trained, AI can lead to bias and discrimination.
For example, “if AI is trained on electronic health records, it is building only on people who can access healthcare and is perpetuating any human bias captured within the records”.
Second, “AI chatbots can generate medical advice that is misleading or false, which is why there’s a need to effectively regulate their use”.
See more from MedicalBrief archives:
AI and what it might mean for healthcare in SA
AI helps drugmakers slash clinical trial costs and time
AI chatbots outstrip doctors in diagnoses – US randomised study
Ethical dilemmas as medicine intersects with AI chatbots