20 May 2024

The Medicines and Healthcare products Regulatory Agency (MHRA) has published its AI strategy, setting out how it intends to regulate the use of AI in healthcare products and use AI to improve its own processes. 

The MHRA strategy is in response to a Government request in February 2024 for regulators to explain how they will implement the AI Regulation White Paper. 

Under the UK approach, regulators will govern the development and use of AI within their specific sectors, without a dedicated AI regulator or general overarching AI legislation (in contrast to the EU's cross-sector EU AI Act). Please see our previous post, What does the HealthTech market need to know about AI law and regulation?, which explains the UK's approach to AI regulation set out in the White Paper and how it compares to the EU approach.

The MHRA recognises the transformative potential of AI in shaping healthcare products and its own regulatory functions. The strategy commits to adopting the five key principles of pro-innovation regulation set out in the White Paper and taking a proportionate approach to regulating AI. As with its broader programme of medical device regulatory reforms, the MHRA points to its collaborations with global regulators to ensure best practice and harmonisation in its approach to AI regulation. 

The strategy does not reveal any new regulatory reforms, but builds on the existing change programme and the MHRA roadmap for regulatory reform of Software and AI as a medical device, first published in 2022 and updated in January 2024, which you can read more about in our previous post Health Tech Series: MHRA reveals timeline for new medical device regulations.

Unlike the EU, where the EU AI Act will impose specific obligations on AI systems alongside, and in addition to, the medical device regulatory regime (EU MDR), the MHRA intends to regulate AI as software as a medical device. It will align any future definitions and approaches to regulation with those endorsed by the International Medical Device Regulators Forum (IMDRF), where the MHRA co-chairs the working group on AI medical devices. 

We summarise the MHRA strategy on AI below. 

How does AI apply within the MHRA’s regulatory responsibilities for medicines and medical devices?

The MHRA recognises that it is responsible for AI in three main areas:

  1. as a regulator of AI products;
  2. as a public service organisation delivering time-critical decisions; and
  3. as an organisation that makes evidence-based decisions that impact on public and patient safety, where that evidence is often supplied by third parties.

In respect of areas (2) and (3), the MHRA states that AI’s use will predominantly be targeted at increasing efficiency, for example by using AI to provide initial assessments of applications for medicine licences. This could bring significant advances by increasing the speed with which new medicines are approved and reducing the time spent on failing products. 

What steps is the MHRA taking to adopt the principles detailed in the White Paper?

The Government’s White Paper set out five principles that should guide the use and development of AI. The table below summarises how the MHRA says its approach aligns with these principles. 



1. Safety, security and robustness

Existing UK regulations impose a wide range of safety, registration and assessment requirements on AI medical products. However, many AI products currently falling within Class I will be reclassified into higher risk classes once the new UK regime is introduced, requiring more products to undergo independent conformity assessment rather than being self-certified.

2. Appropriate transparency and explainability

The MHRA says existing requirements, such as those addressing labelling and the provision of information, align with this principle. However, it recognises that describing the intended use of some software and AI products, and applying human factors to such devices, can be challenging, and that specific guidance is needed in these areas.

3. Fairness

The MHRA recognises issues of bias in devices and is encouraging manufacturers to adopt:

  • standard ISO/IEC TR 24027:2021, which provides techniques for assessing bias within AI systems; and
  • IMDRF guidance document N65, which addresses clinical studies following a product’s launch on the market.

As part of STANDING Together, the MHRA has been working with international organisations to address bias and harms posed by AI.

4. Accountability and governance

The MHRA intends to bolster regulations that set accountability and governance obligations on manufacturers, conformity assessment bodies and the MHRA itself.

The MHRA’s recent guidance on Predetermined Change Control Plans aims to provide traceability and accountability on the performance of AI against intended uses. Compliance with this guidance is voluntary, but the MHRA plans to formalise this in new regulations.

5. Contestability and redress

In addition to the accountability measures above, concerns around AI medical products can be reported through the MHRA’s Yellow Card site. Manufacturers are required to submit incident reports through this site and the MHRA intends to strengthen these duties in relation to medical devices. The public can also submit any concerns on AI medical devices via Yellow Card.

The MHRA works with a variety of other national regulators where safety concerns are raised in relation to patient pathways.

What guidance has the MHRA issued or proposed on how organisations should comply with the White Paper’s principles?

The MHRA’s list of current and future planned guidance on AI is set out in the table below.

Already in place

  • Medical devices: software applications (apps)
  • Crafting an intended purpose in the context of Software as a Medical Device (SaMD)
  • Reporting adverse incidents involving Software as a Medical Device under the vigilance system
  • Good Machine Learning Practice for Medical Device Development: Guiding Principles
  • Predetermined Change Control Plans for Machine Learning-Enabled Medical Devices: Guiding Principles

Planned

  • Best practice AIaMD development and deployment – This guidance will address risks posed by the interface between AI and humans. It is expected to be published in spring 2025.

How is the MHRA assessing and managing risks posed by AI to the medicines and healthcare products sector?

The MHRA recognises that the Life Sciences sector is increasingly using AI to generate data and improve processes when developing medicines and medical devices. It notes that there are no specific regulations governing how such AI systems are used. Instead, it intends to rely on collaborations between international regulators and industry to develop best practice expectations, and considers that existing regulations in areas such as pharmacovigilance provide assurance around the quality of data. The strategy notes that issues linked to the use of AI in developing healthcare products may arise from bias, the human/device interface and safety concerns, but gives little detail on how these will be addressed in practice. 

What initiatives is the MHRA implementing in its own processes?

The MHRA recognises that it is at the start of the journey in understanding how AI can improve its own regulatory processes, but says it will embrace opportunities and align its approach with wider Government strategies for adopting AI. 

If you would like to discuss any issues relating to the regulatory framework surrounding Health Tech and the use of AI, please contact a member of our Health Tech team.

This article was written by Rory Trust and Jacob Hall.
