Thursday, 12 June, 2025

Healthcare data protection in a mushrooming AI-driven sector

With data breaches on the rise worldwide, and South Africa having suffered its own significant cyber-attacks, notably in the healthcare sector and, most recently, on the government’s National Health Laboratory Service (NHLS), legal experts urge a strengthening of existing systems and the adoption of ethically sound practices and adaptable infrastructure to prevent further occurrences.

In the SA Medical Journal, Attorney Safia Mahomed writes that this would, additionally, require robust governance that is specific to the SA context.

She writes:

The concept of keeping health data private is constantly being tested, as what constitutes health data has grown significantly and now includes massive amounts of personal information from various sources.

Collectively termed ‘biomedical big data’ (BD), these sources comprise a health data ecosystem that has altered the landscape of health research. BD provides a natural blueprint for artificial intelligence (AI) to thrive and to generate and advance knowledge exponentially.

However, data breaches are escalating, particularly in the healthcare sector, and the upward climb in local data breaches underscores the urgent need to translate paper into practice by strengthening systems and enforcing the ethico-legal framework governing the processing of data in SA, including ways to handle its misuse efficiently.

Africa under attack

Healthcare sector data breaches accounted for 32% of all data breaches between 2015 and 2022 – almost double the number recorded in the financial and manufacturing sectors.

Additionally, during the second quarter of 2023, the healthcare sector had an average of 1 744 attacks per week, a significant year-on-year increase of 30%. Africa experienced the highest average number of weekly cyber-attacks per organisation – 2 164 attacks – a significant year-on-year increase of 23% compared with the same period in 2022.

A 2023 briefing by the SA Council for Scientific and Industrial Research (CSIR) reported that South Africa was the eighth most targeted country worldwide for ransomware.

Not long after the CSIR released its report, all of the internal and external information technology (IT) systems of the NHLS remained down after an attempted ransomware attack by the BlackSuit hacking group, which reportedly stole 1.2 terabytes of data (equivalent to about 30 large moving boxes full of documents, or the contents of 25 000 books), including third-party, client and patient information.

This article examines the connection between artificial intelligence (AI) and healthcare data in South Africa, recognising that the two are fundamentally linked. It further identifies that the increased demand for data has highlighted vulnerabilities and, while legal protections to safeguard data currently exist, considers whether these are sufficient and translatable to our local context.

Can the use of data be controlled?

As AI systems require data to succeed, a question is whether the use of such data can be sufficiently controlled. Weeks before news of the NHLS ransomware attack, information was released that Informa, parent company of academic publisher Taylor & Francis, had signed a $10m data-access agreement with Microsoft.

The AI partnership agreement gives Microsoft “non-exclusive access to Advanced Learning Content” across Taylor & Francis’ nearly 3 000 academic journals.

After the initial access fee of $10m, Informa said it would receive recurring payments for the next three years.

The agreement allows Microsoft to train its AI models on Taylor & Francis’ extensive catalogue of scholarly publications.

An immediate concern was not that such a partnership agreement took place, but rather that authors were not informed about the bulk sale of their research, including how it would be republished and cited by the publisher’s AI tools.

Informa’s half-year results published during July 2024 confirmed a second major partnership with another AI company, and AI-related revenues are expected to exceed $75m for the year.

The examples of the attempted NHLS ransomware attack and the Informa-Microsoft data-access agreement point to the same lesson: the more data that are required, and the more complex the technology, the better we need to understand the vulnerability of our systems and the greater the control we need over how our data are used.

AI and data in healthcare

The concept of keeping health data private is constantly being tested, as what constitutes health data has grown significantly and now includes massive amounts of personal information from sources such as genomic data, radiological images, medical records, and non-health data converted into health data.

These ‘biomedical big data’ comprise a health data ecosystem that has altered the landscape of health research. The global push for interconnection, enabled by open science, open access and open data, has an impact on data-sharing for research purposes.

AI pilot projects were already being deployed in Africa in the mid-1980s, so the use of AI in healthcare is not completely new; what is new is the development of normative documents comprising principles and guidance for ethical and socially responsible AI.

It is worth noting that the ‘global consensus’ regarding the development of normative documents to guide ethically and socially responsible uses of AI does not always incorporate the African perspective.

Although the application of AI in low- and middle-income countries (LMICs) may be limited by various factors, digital health technologies are already widely used in LMICs for data collection, the dissemination of health information by cellphone, and the extended use of electronic medical records on open-software platforms and cloud computing, among others.

A 2023 Nature survey of 1 600 researchers worldwide revealed an overall positive sentiment regarding the increasing use of AI tools in scientific research, but there were concerns about how AI is transforming research, including issues related to bias, fraud and irreproducibility.

And just when we are beginning to grapple with issues around the ethical management of AI in healthcare, new developments emerge, like Artificial General Intelligence and Artificial Superintelligence. These are advanced AI systems that meet or exceed the skills of human experts.

Although theoretical for now, we must prepare for AI programmes that can independently interpret data and perform reasoning tasks without human intervention at performance levels exceeding those of human experts.

We must consider that AI may become responsible for traditional clinical tasks like diagnostics, data-driven decision-making, and elements of cognitive empathy, and may outperform humans.

Legal landscape

Presently, the right to privacy is a fundamental right protected under section 14 of the Constitution (1996), and the rights to confidentiality and privacy in the health context are further safeguarded in various laws and policy documents.

The National Health Act 61 of 2003 (NHA) provides for the broad protection of patient privacy and confidentiality, while the Protection of Personal Information Act 4 of 2013 (POPIA) is the most significant piece of legislation to consider where the processing of personal information is concerned.

Of significance is section 71 of POPIA, which contains a general prohibition against decisions based solely on the automated processing of personal information, taken without human oversight or intervention.

Regarding the transfer of data, section 72 provides an added layer of protection. National data transfers may take place with informed consent and appropriate ethics review.

International transfers may take place under five circumstances, three of which appear relevant for research purposes. However, only one ground appears practical: when the recipient in the foreign country is subject to a law, binding corporate rules or a binding agreement that provides an adequate level of protection and upholds principles substantially similar to those for the processing of personal information (section 72(1) of POPIA).

A binding contractual agreement, e.g. a data transfer agreement (DTA) that upholds the principles for the processing of personal information as set out in POPIA, seems to provide a realistic solution for transfers of personal information outside our borders.

Exploitation

But while we have progressed in developing laws, policies and guidelines geared towards privacy protections, we must remember the lessons learned from the historical exploitation on the continent.

What we might benefit from:

1. Efforts to ensure data security: implementing strong access-control measures, developing wide-ranging data security policies, adopting advanced security technologies to protect patient data, shutting down systems immediately in the case of an intrusion, removing compromised files, and preserving details of the breach for investigation, all incorporated into a comprehensive incident response plan.
2. A national DTA template to manage data transfer across SA borders.
3. Introducing a framework to regulate AI, speaking to the best interests of all people.
4. Engaging communities and community healthcare leaders to accelerate Fourth Industrial Revolution (4IR) technology adoptions and education, to familiarise population groups with existing technology and future expectations.
5. Upskilling research ethics committees (RECs) to equip members to deal with protocols involving BD and AI.
6. Establishing clarity around data ownership.
7. Addressing data availability and the costs associated with its acquisition, which remain an obstacle to AI development in Africa: directed investment, including capacity building at national and regional level, the development of digital infrastructure, and internet accessibility (use and coverage), is crucial.

While privacy governance in SA has progressed rapidly, and the country has made notable progress towards a comprehensive ethico-regulatory framework designed to safeguard the privacy and confidentiality of patients’ and research participants’ data within the healthcare sector, the challenge lies in implementing privacy protections effectively while balancing safeguards against the growing need to flourish in the era of open science.

AI advancements present considerable challenges, and the rise in local data breaches underscores the urgent need to translate paper into practice by strengthening systems and enforcing the ethico-legal framework governing the processing of data in this country, including how to efficiently handle its misuse.

Safia Mahomed, BCom, LLB, LLM, PhD, Department of Jurisprudence, School of Law, University of South Africa.

 

SA Medical Journal article – Data privacy and protection in AI-driven healthcare (Creative Commons Licence)

 

See more from MedicalBrief archives:

SA has highest percentage of human error healthcare data breaches – report

NHLS system still faltering as cyberattacks hit global healthcare

Key health service units targeted by hackers

Hackers target Mediclinic staff data

Cover-up claims over NHS data breach affecting thousands
