New recommendations to increase transparency and tackle potential bias in medical AI technologies
December 21, 2024

Patients will be better able to benefit from innovations in medical artificial intelligence (AI) if a new set of internationally agreed recommendations is followed.

A new set of recommendations, published in The Lancet Digital Health and NEJM AI, is designed to improve how datasets are used to build AI health technologies and to reduce the risk of potential AI bias.

Innovative medical AI technology may improve patient diagnosis and treatment, but some research suggests medical AI may be biased, meaning it works well for some people but not others. This means that some individuals and communities may be “left behind” when using these technologies, and may even be harmed.

An international initiative called STANDING Together issued the recommendations as part of a study involving more than 350 experts from 58 countries. These recommendations are designed to ensure that medical artificial intelligence is safe and effective for everyone. They cover many factors that can lead to bias in artificial intelligence, including:

  • Encourage the development of medical AI using appropriate datasets that properly represent everyone in society, including minority and underserved groups;
  • Help anyone publishing a medical dataset identify any biases or limitations in the data;
  • Enable those developing medical AI technologies to assess whether a dataset is suitable for their purposes;
  • Define how AI technologies should be tested to identify whether they are biased, and therefore less effective, for certain groups.

Dr. Xiaoxuan Liu, Associate Professor of Artificial Intelligence and Digital Health Technologies at the University of Birmingham and lead researcher on the study, said:

“Data is like a mirror, providing a reflection of reality. When data is distorted, it can amplify societal biases. But trying to fix the problem by fixing the data is like wiping the mirror to remove a stain from your shirt.

“To create lasting change in health equity, we must focus on fixing the source, not just the reflection.”

The STANDING Together recommendations aim to ensure that datasets used to train and test medical AI systems represent the full diversity of the populations for which the technology is intended. This matters because AI systems often perform poorly for people who are not properly represented in the underlying data. People from minority groups are particularly likely to be underrepresented in datasets and may therefore be disproportionately affected by AI bias. The recommendations also provide guidance on identifying those who may be harmed by medical AI systems, so that this risk can be reduced.

STANDING Together is led by researchers from University Hospitals Birmingham NHS Foundation Trust and the University of Birmingham, UK. The study was conducted with collaborators from more than 30 institutions around the world, including universities, regulators (UK, US, Canada and Australia), patient groups and charities, and small and large medical technology companies. This work was funded by the Health Foundation and the NHS AI Laboratory, with support from the National Institute for Health and Care Research (NIHR), an NHS, public health and social care research partner.

In addition to the recommendations themselves, a commentary published in Nature Medicine by STANDING Together patient representatives emphasized the importance of public participation in shaping medical artificial intelligence research.

Sir Jeremy Farrar, chief scientist of the World Health Organization, said:

“Ensuring we have diverse, accessible and representative datasets to support the responsible development and testing of artificial intelligence is a global priority. The STANDING Together recommendations are an important step toward ensuring equity in health AI.”

Dominic Cushnan, deputy director of artificial intelligence at NHS England, said:

“It is vital that we have transparent and representative datasets to support the responsible and equitable development and use of artificial intelligence. STANDING Together’s recommendations are timely as we harness the exciting potential of AI tools, and the NHS AI Lab fully supports the adoption of their practices to mitigate AI bias.”

The recommendations were published on 18 December 2024 and are publicly available through The Lancet Digital Health.

These recommendations may be particularly helpful to regulatory agencies, health and care policy organizations, funding agencies, ethical review committees, universities and government departments.
