Saturday, 4 May, 2024

WHO issues AI regulatory list

The World Health Organisation (WHO) has released a new publication listing key regulatory considerations on artificial intelligence (AI) for health.

The publication emphasises the importance of establishing AI systems’ safety and effectiveness, rapidly making appropriate systems available to those who need them, and fostering dialogue among stakeholders, including developers, regulators, manufacturers, health workers and patients.

With the increasing availability of healthcare data and the rapid progress in analytic techniques – whether machine learning, logic-based or statistical – AI tools could transform the health sector.

The agency recognises the potential of AI in enhancing health outcomes by strengthening clinical trials; improving medical diagnosis, treatment, self-care and person-centred care; and supplementing healthcare professionals’ knowledge, skills and competencies.

For example, AI could be beneficial in settings with a lack of medical specialists, such as in interpreting retinal scans and radiology images, among many other uses.

However, AI technologies, including large language models, are being rapidly deployed, sometimes without a full understanding of how they may perform, which could either benefit or harm end-users, including healthcare professionals and patients.

When using health data, AI systems could have access to sensitive personal information, necessitating robust legal and regulatory frameworks for safeguarding privacy, security, and integrity, which this publication aims to help set up and maintain.

“Artificial intelligence holds great promise for health, but also comes with serious challenges, including unethical data collection, cybersecurity threats and amplifying biases or misinformation,” said Dr Tedros Adhanom Ghebreyesus, WHO director-general.

“This new guidance will support countries to regulate AI effectively, to harness its potential, whether in treating cancer or detecting tuberculosis, while minimising the risks.”

In response to the growing need to responsibly manage the rapid rise of AI health technologies, the publication outlines six areas for regulation:

• To foster trust, the publication stresses the importance of transparency and documentation, such as through documenting the entire product lifecycle and tracking development processes.
• For risk management, issues like “intended use”, “continuous learning”, human interventions, training models and cybersecurity threats must all be comprehensively addressed, with models made as simple as possible.
• Externally validating data and being clear about the intended use of AI helps assure safety and facilitate regulation.
• A commitment to data quality, such as through rigorously evaluating systems pre-release, is vital to ensuring systems do not amplify biases and errors.
• The challenges posed by important, complex regulations – such as the General Data Protection Regulation (GDPR) in Europe and the Health Insurance Portability and Accountability Act (HIPAA) in the United States – are addressed, with an emphasis on understanding the scope of jurisdiction and consent requirements, in service of privacy and data protection.
• Fostering collaboration between regulatory bodies, patients, healthcare professionals, industry representatives and government partners can help ensure products and services remain compliant with regulation throughout their lifecycles.

It can be difficult for AI models to accurately represent the diversity of populations, leading to biases, inaccuracies or even failure. To help mitigate these risks, regulations can be used to ensure that attributes – such as gender, race and ethnicity – of the people featured in the training data are reported, and that datasets are intentionally made representative.

The new publication aims to outline key principles that governments and regulatory authorities can follow to develop new guidance or adapt existing guidance on AI at national or regional levels.

WHO guidelines (Open access)

See more from MedicalBrief archives:

Growing role for AI in everyday medical interactions

AI helps drugmakers slash clinical trial costs and time

ChatGPT diagnoses child’s illness after 17 doctors fail

Will AI make radiologists redundant?
