
Scaling AI diagnostic tools in the NHS: Balancing innovation and privacy considerations


Introduction

The UK government has recently announced the development of a new cloud-based artificial intelligence research screening platform (‘AIR-SP’) which will allow AI diagnostic tools to be tested on an unprecedented scale across the NHS. The platform is backed by nearly £6 million in government funding and is being built by NHS England to enable trusts across the country to join AI screening trials to help boost early diagnoses. 

Currently, several operational barriers within NHS Trusts hinder the testing and deployment of AI screening tools. 90% of AI tools being trialled by the NHS remain stuck in the pilot phase, largely because each trial relies on temporary IT setups within an individual Trust. AI tools are therefore piloted on a Trust-by-Trust basis, and even where one Trust deems a tool effective, other Trusts must start the testing process from scratch, setting up new databases to access the images generated by the AI.

The new NHS-wide platform is intended to support the uptake of new diagnostic tools quickly, safely and at scale by hosting multiple AI tools in a single environment with secure connections to all NHS Trusts. This is expected to dramatically cut the time and costs associated with rolling out AI research studies.

The platform will first be used in a historic National Institute for Health and Care Research (NIHR)-funded trial involving nearly 700,000 women across the country, identifying changes in breast tissue that show possible signs of cancer and referring those women for further investigation if required.

Privacy implications for organisations

Using patient data to trial AI diagnostic tools raises important data protection concerns for NHS Trusts and other organisations involved in the development and deployment of AI tools. Health data is a particularly complex area within the UK’s data protection regime: it is classified as ‘special category data’ under the UK General Data Protection Regulation (UK GDPR), the Data Protection Act 2018 (DPA) and the Data (Use and Access) Act 2025. Its processing is therefore subject to heightened obligations, reflecting its sensitivity and the potential impact on individuals’ rights if it is not handled in accordance with data protection laws.

Organisations will need to consider carefully their role when processing patient data in the context of developing and deploying AI screening tools. While NHS Trusts often act as data controllers, AI developers will not always be processors: in complex AI supply chains, joint controllership may apply. The ICO’s guidance on generative AI, published in January this year, encourages joint controller arrangements where suitable and emphasises that an organisation’s controller status is determined by its factual influence over the purposes and means of processing, rather than by what the relevant contract says.

The use of patient data in the development of new diagnostic tools presents additional challenges, as there may be multiple purposes for processing the data and/or these purposes may change over time. For example, scientific research does not always begin with a single, clearly defined research question: there may be multiple questions under investigation, or the focus may evolve over time. The legal basis for processing patient data will need to be reviewed in these cases, or if the data is used for a different purpose (e.g. in the delivery of patient care). Consent is unlikely to be a suitable lawful basis for using patient data to train AI diagnostic tools, as consent must be specific to the intended use and updated patient consent would have to be sought for each new line of research; there are also practical challenges in managing withdrawal where consent is relied on. Where patient data is used for research, the appropriate basis for processing is likely to be public health or research purposes, and depending on which condition is relied on, an appropriate policy document may be required.

Organisations will need to ensure compliance with data protection laws during each phase, beginning at the design stage. The use of AI to analyse patient health data will be considered high-risk processing, so a Data Protection Impact Assessment (DPIA) must be carried out before implementing AI-based tools, assessing risks across the lifecycle, including the design, testing and implementation phases. DPIAs should address not only privacy risks but also broader ethical concerns, including potential bias in training data and discriminatory outcomes.

Organisations should carefully consider the minimum amount of data required to develop a particular diagnostic tool and how best to protect patient privacy. Measures may include limiting the data used to the minimum required for diagnostic purposes, uploading only pseudonymised data to databases, or incorporating synthetic data to help preserve patient anonymity.
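By way of illustration, the short Python sketch below shows one common pseudonymisation technique, keyed hashing (HMAC), in which a direct identifier such as an NHS number is replaced with a stable pseudonym before a record is uploaded. The field names, example values and key handling are illustrative assumptions only, not a description of the NHS platform or any particular Trust’s approach.

```python
import hmac
import hashlib

# Illustrative sketch only: keyed pseudonymisation, not the NHS platform's method.
# The secret key must be generated and stored separately from the research dataset
# (e.g. in a key management service) so that pseudonyms cannot be reversed by
# anyone holding the dataset alone.
SECRET_KEY = b"replace-with-a-securely-stored-random-key"  # hypothetical placeholder


def pseudonymise(patient_id: str) -> str:
    """Replace a direct identifier (e.g. an NHS number) with a stable pseudonym.

    HMAC-SHA256 with a secret key yields the same pseudonym for the same
    patient across records (allowing linkage within a study) while preventing
    re-identification by anyone without access to the key.
    """
    return hmac.new(SECRET_KEY, patient_id.encode("utf-8"), hashlib.sha256).hexdigest()


# Hypothetical record: strip the direct identifier and substitute the pseudonym
# before the record leaves the Trust's environment.
record = {"nhs_number": "9434765919", "scan_id": "IMG-0042", "finding": "routine recall"}
upload_ready = {k: v for k, v in record.items() if k != "nhs_number"}
upload_ready["patient_pseudonym"] = pseudonymise(record["nhs_number"])
print(upload_ready)
```

It is worth noting that pseudonymised data of this kind generally remains personal data under the UK GDPR, because re-identification remains possible for anyone holding the key: pseudonymisation reduces risk, but it does not of itself take the processing outside the data protection regime.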

The development of reference databases for AI diagnostic tools from patient data will also raise concerns about data ownership and the use of proprietary data. Patients retain ownership of their data, while the NHS acts as its legal custodian. Licence agreements with development partners will therefore need to address carefully how patient data may be used, including clearly defined permissible uses and restrictions on data sharing. Any commercialisation of AI diagnostic tools developed using data from NHS trials, including the underlying data sets, will trigger the need for further review.

Key takeaways

The announcement of the new AI research screening platform represents a significant step towards delivering on the government’s ‘Plan for Change’ and the shift from analogue to digital healthcare. By providing a secure and streamlined platform for accessing NHS data, the service is expected to expedite the uptake of new diagnostic tools and improve patient care, benefiting patients and, ultimately, the wider healthcare system. However, it is crucial that the platform’s implementation is accompanied by robust data protection measures and governance to ensure the confidentiality and security of patient data. Its success will depend on rigorous data governance frameworks, clear role allocation and ongoing stakeholder engagement.

For advice on the deployment of AI tools, including data protection and contractual considerations, please contact Martin Cook, Madelin Sinclair McAusland, Amanda Leiu or a member of Burges Salmon's Commercial & Technology team.

This article was written by Harriette Alcock, Victoria McCarron and Amanda Leiu.