Ethics and governance of artificial intelligence for health: guidance on large multi-modal models

Publication date: 18 January 2024
Category:
  • Artificial Intelligence
  • Digital
  • Ethics
  • Health Policy
  • Human Rights
  • Security
Publishing organisation: World Health Organization

On 18 January 2024, the World Health Organization (WHO) released updated guidance on the ethics and governance of large multi-modal models (LMMs) in healthcare. This new guidance addresses the rapid growth and application of generative AI technologies in healthcare, offering over 40 recommendations for governments, technology companies, and healthcare providers. It emphasizes the responsible use of LMMs to enhance health outcomes and safeguard population health.

LMMs can accept multiple types of data input, such as text, videos, and images, and generate diverse outputs; they are notable for mimicking human communication and carrying out tasks they were not explicitly programmed to perform. These models have seen rapid adoption, with applications ranging from diagnosis and clinical care to medical education and scientific research. However, the guidance also points out risks, including the potential for generating misleading or biased information, as well as the challenges of ensuring equitable access and cybersecurity.

The WHO underscores the need for a collaborative approach among stakeholders, including governments, the private sector, healthcare providers, and civil society, in the development, deployment, and regulation of LMMs. Key recommendations for governments include investing in ethical AI infrastructure, enacting laws and regulations that uphold human rights and ethical standards, and establishing regulatory bodies to assess AI applications in healthcare. Additionally, the guidance calls for AI to be designed inclusively and transparently, with the engagement of a broad range of stakeholders, and for LMMs to be assigned well-defined, reliable tasks that improve health systems and patient outcomes.