New technology is always evolving the way we live and work, and the rapid development of Artificial Intelligence (AI) tools, in particular, is changing the workplace at pace, writes Dr Volker Hitzeroth, medico-legal consultant at Medical Protection.
We know doctors are curious about incorporating tools such as transcribing software into their practice to facilitate more efficient working, meaning they can spend more time engaging with their patients.
At Medical Protection, we’ve seen a growing interest in this area, and are keen to help doctors understand the medico-legal risks associated with using these tools.
AI transcribing programmes, sometimes referred to as ambient scribes or AI scribes, include advanced ambient voice technologies (AVTs) that record a consultation and transcribe it into written notes. This has the potential advantage of saving the time spent taking written notes.
The Health Professions Council of South Africa (HPCSA) has recently published revisions to Booklet 20 – AI Guidelines on Ethical Use of Artificial Intelligence. While the updated guidance is generally encouraging of the use of AI tools, there are some key considerations doctors should be aware of when evaluating the use of ambient scribing in their practices.
The revised version notes that AI technologies have “led to notable changes in the delivery of healthcare” and that overall, the use of AI in providing healthcare “should minimise potential data-related harm… promote the equitable delivery of safe quality care and maintain the integrity of the practitioner/patient relationship”.
Doctors should always ensure the notes produced by transcription software are an accurate reflection of the consultation and do not omit any potentially relevant information. It is also important that notes are proportionate and do not contain unnecessary information that may delay or detract from patient care.
The HPCSA guidelines place ultimate responsibility for use of AI on healthcare professionals, meaning important issues such as data protection and consent should also be considered when using such programmes.
Data protection
Before a clinician or practice starts using an AI system, they should ensure they understand what personal data it uses and how; where the personal data is stored when a third party is involved in the processing; whether it is retained by the provider of the AI product or re-used in any way; and how the product can support compliance with the Protection of Personal Information Act (POPIA) and the Promotion of Access to Information Act (PAIA).
In addition to compliance with existing legislation, the HPCSA’s guidelines state “anonymisation of data does not provide enough protection to a patient’s information when machine-learning algorithms can identify an individual… with as few as three data points”.
It states that medical practitioners have a responsibility to understand AI technologies to mitigate any risks to confidentiality.
To ensure the use of AI software is secure and in line with data protection legislation, we would recommend seeking guidance from a senior colleague, the relevant professional society, and the Information Regulator or The Cybersecurity Hub.
Consent
Any AI tools used in South Africa must comply with POPIA, which requires that personal data be processed only where the data subject consents.
Individual practices should ensure they institute a policy regarding the use of AI, including when informed patient consent should be obtained. Medical Protection advises that informed consent should always be sought from patients before using AI tools that require the sharing of their personal data with a third party.
This consent can be obtained in various ways, including verbally during the consultation or via a physical consent form provided beforehand. The consent given should be documented in the patient’s medical record. The HPCSA’s guidance on decision-making and consent offers further helpful detail.
Critically, where patients decline the use of AI, the HPCSA’s guidance specifically states that patients “may not be disadvantaged or refused access to health services”.
Indemnity
Understandably, doctors may have questions about whether they can request assistance should medico-legal issues arise from their use of AI.
At Medical Protection, we appreciate the benefits that AI can bring as an adjunct to clinical practice, and recognise that members will be making use of this emerging technology.
Members can request assistance with matters arising from the use of AI systems, provided the system is not fully autonomous and a human retains ultimate oversight or final decision-making authority.
As such, members can request assistance in the usual way where a clinical negligence matter arises from their use of AI software, provided the issue relates to the member’s own clinical judgment or actions.
This assistance would apply where Medical Protection is providing the member with indemnity for clinical negligence claims. Members can also seek assistance with other medico-legal matters, such as regulatory investigations, arising from the use of AI software in their clinical practice, in the usual way.
Medical Protection would not normally provide indemnity for issues relating to the failure of AI software itself – for example, if the software has been incorrectly programmed or developed.
Where a doctor is employed or working in a state hospital, indemnity for claims would usually fall to the hospital or provincial Health Department. Medical Protection would urge such doctors to clarify the indemnity position with the hospital or department concerned.
Doctors must take care when entering into contracts or agreements with AI suppliers, and they should be cautious about agreeing to indemnify any third party against a claim.
Summary
In this rapidly progressing area, doctors should continue to work in a manner consistent with the relevant legislation and the HPCSA’s ethical guidance.
Doctors should contact Medical Protection or their Medical Defence Organisation if they have any medico-legal concerns around the use of AI.