Thursday, 19 June, 2025

The challenges of rapidly evolving AI in healthcare

The first pregnancy facilitated by an artificial intelligence (AI) procedure has been carried out in the US. And approval for medicines and medical devices will soon be sped up as the Food and Drug Administration (FDA) plans to switch to AI to “radically increase efficiency”.

These are among dozens of AI developments rapidly transforming healthcare, with applications ranging from diagnostics and predictive analytics to administrative automation. MedicalBrief writes that while AI holds immense potential to enhance clinical efficiency and improve patient outcomes, its integration into medical practice is not without challenges.

Physicians remain divided; some view AI as a powerful tool for augmenting medical decision-making, while others question its reliability, ethical implications, impact on the physician-patient relationship, and the issue of liability when things go wrong.

According to Dr Marty Makary, the FDA commissioner, and Dr Vinay Prasad, who leads the division overseeing vaccines and gene therapy, one of the agency’s top priorities is AI technology for drug approvals.

Outlining their new plans in JAMA Network, they said they want to shorten the final stages of drug and medical device approval decisions to mere weeks, citing the success of Operation Warp Speed during the Covid pandemic, when workers raced to curb a spiralling death count. Read more on this and the AI-facilitated pregnancy in the sidebar.

In the SA Medical Journal, AE Daryanani and JM Ehrenfeld explore how AI is being used in healthcare, the barriers preventing its seamless integration, and the governance structures needed to ensure its responsible deployment.

They write:

Unlike traditional medical tools, AI is not static. It learns, adapts and sometimes fails in ways even its creators struggle to predict. When AI‑generated recommendations contradict a physician’s clinical intuition, the challenge is not just about trust but about accountability.

Physicians must navigate a new layer of complexity where decisions are influenced by systems that do not always provide clear reasoning. Additionally, poor integration of AI tools with electronic health record (EHR) systems may compromise usability and further increase physicians’ clerical and administrative workload, a known contributor to burnout.

Where AI makes an incorrect diagnosis or a flawed recommendation, the issue of liability remains unresolved. Should responsibility fall on the physician who relied on AI, the developer who built the algorithm, or the hospital that implemented the system?

These are not hypothetical dilemmas. They are unfolding now, in real hospitals, affecting real patients. Beyond these technical concerns, AI is reshaping the human dynamics of medicine – but if not handled carefully, there is a risk medicine could drift toward a system where physicians are viewed more as interpreters of algorithmic outputs than as independent decision‑makers.

Ensuring AI augments rather than diminishes the physician’s role will be critical to maintaining the integrity of medical practice. Medicine now faces defining questions: will AI alleviate burdens or introduce new ones? Will it empower physicians or de-skill them?

The answers will shape the next era of healthcare, not just for doctors but for the patients whose lives depend on them.

Advances

Incredible advancements have been made in recent years in the development and implementation of AI‑powered tools to enhance disease detection. Aidoc, an Israeli technology company founded in 2016, has been a pioneer and leader in AI‑enhanced radiology tools, and holds the largest suite of FDA‑cleared algorithms in a single proprietary platform.

Aidoc’s detection algorithms aim to accelerate the diagnosis of time‑sensitive and time‑consuming pathologies, like pulmonary embolism, intracranial haemorrhage, acute abdominal findings, and aortic dissection, to improve patient outcomes.

Their wide portfolio also facilitates increased detection of incidental findings and spans numerous applications across neurovascular, chest, cardiothoracic, breast, abdominal, and musculoskeletal radiology.

Furthermore, independent clinical studies have validated their AI tools, demonstrating a high degree of diagnostic accuracy and clinical usefulness.

Beyond radiology, AI‑driven diagnostic applications are advancing rapidly across various medical specialties. One of AI’s distinct advantages is the ability to analyse and integrate multiple patient data sources, such as medical imaging, laboratory test results, EHR data, and vital signs, among others, at a very large scale to assist healthcare providers in identifying and diagnosing diseases faster and more accurately.

Predictive analytics and risk assessment

AI models have also shown significant promise in forecasting disease progression, hospital readmission risks, and treatment outcomes. For instance, AI‑driven sepsis prediction systems, analysing EHR data and continuous vital signs, have been shown to reduce mortality rates by enabling early detection, personalised treatment, and real‑time monitoring of sepsis patients.
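
For readers interested in the underlying mechanics, here is a minimal, purely illustrative sketch of an early-warning classifier trained on synthetic vital-sign data. The features, distributions, labels and thresholds are all invented for illustration and bear no relation to the validated sepsis-prediction systems described above.

```python
# Purely illustrative sketch of an early-warning model on SYNTHETIC vital-sign
# data. Feature choices and distributions are hypothetical, not a real system.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 2000

# Synthetic observations: heart rate, respiratory rate, temperature,
# systolic blood pressure, white-cell count (all invented distributions).
X = np.column_stack([
    rng.normal(85, 15, n),     # heart rate (bpm)
    rng.normal(18, 4, n),      # respiratory rate (breaths/min)
    rng.normal(37.0, 0.7, n),  # temperature (deg C)
    rng.normal(120, 18, n),    # systolic BP (mmHg)
    rng.normal(9, 3, n),       # WBC (10^9/L)
])

# Synthetic label: risk rises with tachycardia, tachypnoea, fever and hypotension.
logit = (0.04 * (X[:, 0] - 85) + 0.15 * (X[:, 1] - 18)
         + 0.8 * (X[:, 2] - 37.0) - 0.03 * (X[:, 3] - 120))
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

risk = model.predict_proba(X_test)[:, 1]
print("AUROC on held-out synthetic data:", round(roc_auc_score(y_test, risk), 3))

# In a deployed system, a risk score above a tuned threshold would trigger
# an alert for clinician review rather than an automatic intervention.
```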

Researchers are also leveraging traditional, non‑invasive diagnostic tests in novel ways, using AI to detect patterns and correlations that were previously undetectable to clinicians.

Another promising area of AI application is clinical documentation automation, including medical note-taking, insurance processing, and clinical trial enrolment. In a recent American Medical Association (AMA) survey, 57% of physicians identified addressing administrative burden through automation as the top area of opportunity. Meanwhile, large language models (LLMs) like ChatGPT and purpose‑built AI‑facilitated clinical documentation tools, such as DeepScribe and DAX Copilot, have been shown to reduce the administrative burden on physicians, both by improving documentation quality and by saving time.

Greater adoption by health systems and seamless EHR integration could further enhance physician efficiency, allowing for more direct patient interaction, and potentially reduce burnout due to high administrative loads.

However, results have been inconsistent, with significant provider‑to‑provider variability. AI scribes that summarise patient‑provider interactions hold promise, but challenges include hallucinations, omission of critical patient details, and a lack of complex medical reasoning.

As a result, constant fact‑checking is required, making performance gains uncertain.
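
To illustrate why that fact-checking matters, below is a toy sketch of one very simple safeguard: flagging draft-note sentences whose content words are poorly covered by the visit transcript. The transcript, draft note and overlap threshold are invented, and commercial scribes use far more sophisticated grounding checks than this lexical comparison.

```python
# Toy illustration of why AI-generated notes still need clinician review:
# a naive check that flags draft-note sentences with little lexical overlap
# with the visit transcript. All example text and thresholds are invented.
import re

def tokens(text: str) -> set[str]:
    """Lower-case word tokens, ignoring very short words."""
    return {w for w in re.findall(r"[a-z']+", text.lower()) if len(w) > 3}

def flag_unsupported(note_sentences: list[str], transcript: str,
                     min_overlap: float = 0.4) -> list[str]:
    """Return draft sentences whose content words are poorly covered by the transcript."""
    source = tokens(transcript)
    flagged = []
    for sentence in note_sentences:
        words = tokens(sentence)
        if not words:
            continue
        coverage = len(words & source) / len(words)
        if coverage < min_overlap:
            flagged.append(sentence)
    return flagged

transcript = ("Patient reports three days of productive cough and mild fever. "
              "No chest pain. Takes lisinopril daily for hypertension.")
draft_note = [
    "Three days of productive cough with mild fever.",
    "Denies chest pain.",
    "Patient reports severe shortness of breath at rest.",  # not in transcript
]

for sentence in flag_unsupported(draft_note, transcript):
    print("REVIEW:", sentence)
```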

Challenges and risks

While AI is already demonstrating value in diagnostics, predictive analytics, and administrative automation, its widespread adoption is not without hurdles. As mentioned, AI is not static; it learns, adapts, and sometimes fails, introducing uncertainty and risk.

Although physician enthusiasm for AI is rising, with 66% of physicians now using AI tools compared with 38% in 2023, concerns over clinical reliability, liability, ethical implications, bias, workflow integration, and regulatory gaps have also increased.

Addressing these concerns is critical to ensuring that AI serves as an asset rather than a liability in modern healthcare.

AI is not only transforming how physicians diagnose and treat disease, but also reshaping the fundamental nature of patient interactions. While it can improve efficiency and augment clinical decision‑making, 39% of physicians worry it may negatively affect patient interactions.

As AI‑generated recommendations become more common, there is concern they could erode patient trust in their providers. Additionally, physicians are concerned about cognitive overload, as they must interpret AI‑driven insights while maintaining direct patient engagement.

To ensure AI enhances rather than disrupts care, thoughtful implementation and constant evaluation are necessary. The AMA advocates for ‘augmented intelligence’, a conceptualisation of AI that emphasises its assistive role rather than autonomy.

This human‑centred approach aims to preserve the integrity of the physician‑patient relationship, enhance clinical outcomes, and support provider well-being.

Legalities

As AI takes on a greater role in clinical decision‑making and administrative processes, it introduces complex legal and ethical questions that merit careful consideration. The most pressing issue is liability: who bears responsibility when AI makes an incorrect or harmful recommendation?

Liability in the use of AI remains largely undefined, but it could fall on multiple parties – the physician who relied on the AI system’s suggestion or prediction, the hospital or institution that deployed the system, or the AI developer that built it.

The AMA supports a risk‑based approach, meaning that accountability falls on the entity best positioned to mitigate risk or harm. For example, if an AI developer used flawed data, they should be liable; if a hospital failed to implement AI systems correctly, liability should fall on the institution; and if a provider misused an AI tool outside its intended use case and harm resulted, they should bear responsibility.

Bias

Bias in AI models remains a persistent challenge. Because AI is trained on and learns from historical data, it is inherently shaped by pre‑existing – and often unrecognised – biases. This can exacerbate healthcare disparities rather than eliminate them.

Without proper safeguards, AI tools risk producing skewed recommendations that disproportionately affect certain populations.
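
One concrete, if simplified, way to surface such skew is to compare a model’s error rates across patient subgroups before deployment. The sketch below does this on entirely synthetic data; the groups, feature and model are hypothetical and serve only to show the shape of a basic bias audit, not any real fairness methodology.

```python
# Illustrative bias audit on SYNTHETIC data: compare a model's sensitivity
# across two hypothetical patient subgroups. Real fairness evaluations use
# many more metrics and clinical context; this only shows the basic idea.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score

rng = np.random.default_rng(1)
n = 4000
group = rng.integers(0, 2, n)      # 0 / 1: two invented subgroups
feature = rng.normal(0, 1, n)

# Simulate a label whose relationship to the feature differs by group,
# mimicking historical data that represents one population's presentation poorly.
signal = np.where(group == 0, 1.5 * feature, 0.5 * feature)
y = (rng.random(n) < 1 / (1 + np.exp(-signal))).astype(int)

model = LogisticRegression().fit(feature.reshape(-1, 1), y)
pred = model.predict(feature.reshape(-1, 1))

for g in (0, 1):
    mask = group == g
    print(f"Group {g}: sensitivity = {recall_score(y[mask], pred[mask]):.2f}")

# A large gap between groups signals that the model's errors fall unevenly,
# which is what structured bias audits aim to surface before deployment.
```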

Governance structures and regulatory frameworks must evolve to keep pace with AI’s rapid advancements. Collaboration between physicians, medical organisations, AI developers, and regulatory agencies is essential to establish standards that prioritise patient safety, transparency, and clinical efficacy.

Ensuring responsible AI integration will require proactive oversight, continuing validation, and physician‑led implementation strategies to harness AI’s full potential while safeguarding medical ethics and public trust.

Physician trust in AI is also a critical determinant for its long‑term success in healthcare.

The AMA’s Physician Sentiment Report on AI use found that while adoption is increasing, many physicians remain cautious, with 47% of them prioritising stronger oversight as a key requirement for building trust in AI tools.

Physicians are uniquely positioned to guide AI’s integration into clinical workflows, ensuring AI aligns with the realities of patient care rather than introducing additional complexities.

Regulatory clarity is essential to AI’s future in medicine, but given AI’s rapid evolution and expanding use cases, this presents unique challenges. The AMA advocates for standardised evaluation frameworks, similar to those used for drugs and medical devices, to assess AI’s safety, efficacy and clinical utility.

As legislative efforts progress, the demand for stricter regulations and greater transparency will be particularly important for high‑risk AI applications that have a direct impact on patient care.

The success of AI in healthcare will ultimately be determined by how well it integrates into clinical practice, regulatory frameworks and ethical guidelines.

Prioritising physician involvement, patient‑centred implementation, and robust oversight is essential to ensuring it fulfils its potential to enhance care while preserving the core values of medical professionalism.

AI is no longer a distant concept but an active force shaping modern healthcare. It holds immense promise to positively impact health and well-being for humanity.

However, its long‑term impact depends not on technological advancements alone, but on how it is governed, trusted, and integrated into medical practice.

It must be evidence‑based, ethically sound, and transparent to ensure it enhances rather than disrupts healthcare delivery. The transition from traditional medicine to AI‑assisted care must be guided by physician leadership, regulatory oversight, and a steadfast commitment to patient welfare.

Ensuring it remains an assistive tool rather than a substitute for clinical judgment will require continuing validation, bias mitigation and ethical safeguards.

Education and training will also be critical in preparing physicians to implement AI tools responsibly. As AI becomes more deeply embedded in medical decision‑making, healthcare professionals must be equipped to critically evaluate its recommendations, advocate for ethical deployment, and maintain accountability in patient care.

AI will not replace physicians, but physicians who effectively use AI will be better positioned to lead the future of medicine. Ensuring it remains a trusted, explainable and accountable tool is essential to maximising its benefits while upholding the highest standards of medical practice.

A E Daryanani, MD (1)
J M Ehrenfeld, MD, MPH (1, 2, 3)

1. Advancing a Healthier Wisconsin Endowment, Medical College of Wisconsin, USA
2. Department of Anaesthesiology, Medical College of Wisconsin, Milwaukee, USA
3. American Medical Association, Chicago, USA

 

SA Medical Journal article – AI in medicine: Hype, hope, and the path forward (Creative Commons Licence)

 

See more from MedicalBrief archives:

 

Healthcare data protection in a mushrooming AI-driven sector

 

AI algorithms in diagnosis could harm patients – Dutch study

 

AI changing radiology, but not replacing human input

 

AI chatbots outstrip doctors in diagnoses – US randomised study

 

Growing role for AI in everyday medical interactions