AI regulation in the UK: Government response to White Paper

The UK Government has published its response to the AI Regulation White Paper consultation, which set out its pro-innovation regulatory approach to AI. The UK Government published the White Paper in March 2023 (see our blog post with an overview of the White Paper and, separately, our response to the consultation). The response sets out an overall approach based on cross-sectoral principles, a context-specific framework, international leadership and collaboration, and voluntary measures for developers. The UK Government has also paved the way for future legislation on AI once its risks become better understood, with a particular focus on general-purpose AI systems.
Here we summarise the key points to know and what to expect next.
Broadly, the response reaffirms the UK Government's commitment to the five cross-sectoral principles outlined in the White Paper and confirms its intention to work alongside existing regulators to help them address the challenges posed by AI.
Those cross-sectoral principles, as set out in the White Paper, are:
- safety, security and robustness;
- appropriate transparency and explainability;
- fairness;
- accountability and governance; and
- contestability and redress.
The response highlights how several regulators are already taking steps in line with the principles-based approach. The UK Government has written to the regulators most affected by AI, asking them to publish an update outlining their strategic approach to AI by 30 April 2024. This is expected to include:
The strategies outlined by regulators will help inform the government's assessment of whether there are weaknesses in the current framework that legislation could address.
Additionally, the UK Government has published new guidance to support regulators in interpreting and applying the principles. This is intended to drive coordination between regulators in implementing the regulatory framework.
The response confirms that the UK Government will proceed with establishing a central function to assist with delivering the AI regulation framework. Steps already taken include recruiting a new multidisciplinary team to assess risk and committing to publish an 'Introduction to AI assurance' to help build public trust in AI.
Notably, the response also highlights two methods to ensure AI best practice in the public sector:
One of the headline announcements was a commitment to invest over £100 million to support the development and regulation of AI. This can be broken down into:
This follows the £1.5 billion spent in 2023 on building the next generation of supercomputers and is a clear acknowledgment that regulators need more funding to help them tackle the risks posed by AI.
A large section of the response is dedicated to the challenges posed by highly capable general-purpose AI systems. The response acknowledges that these models pose substantial risks: they can be used in a wide range of applications across different sectors, may not fall neatly within the remit of any single regulator and, more broadly, sit uneasily with the context-based approach advocated in the response.
The UK Government suggests that 'AI technologies will ultimately require legislative action in every country once understanding of risk has matured'. The response stops short of committing to introduce legislation in the UK, focussing instead on the role of voluntary measures in mitigating the risks posed by these models. Any future binding measures would ensure developers adhere to the principles set out above and would only be introduced if 'existing mitigations were no longer adequate'. In the short term, the government will continue to consult relevant stakeholders throughout 2024 to assess how the regulatory framework is working and to develop its understanding of the risks posed by AI in different sectors.
The UK government lists the actions it intends to take during 2024, including:
The response reflects the government's ambition to become the international standard bearer for the safe development and deployment of AI, while also harnessing its potential to boost the economy and transform public services. The response acknowledges the need for an agile regulatory system that can adapt to emerging issues and challenges posed by AI but leaves the door open for legislation in the future.
Although the UK has adopted a lighter-touch approach to AI regulation than the EU, businesses should not delay in planning how to manage the real and significant risks which AI presents. A range of legislation and regulation already exists in the UK which impacts how AI is procured, developed and deployed. As outlined above, many regulators are already taking action within their domains, and businesses should be ready to adapt and respond to updated guidance or strategies focused on sector-specific activities. Businesses operating across multiple jurisdictions will need to prepare for the imminent arrival of the EU AI Act.
If you have any questions or would otherwise like to discuss any of the issues raised in this article, please contact David Varney, Tom Whittaker, Liz Smith, or another member of our Technology Team. For the latest updates on AI law, regulation, and governance, see our AI blog at: AI: Burges Salmon blog (burges-salmon.com).
This article was written by Sam Efiong.
'The UK AI market is predicted to grow to over $1 trillion (USD) by 2035 – unlocking everything from new skills and jobs to once unimaginable life-saving treatments for cruel diseases like cancer and dementia. My ambition is for us to revolutionise the way we deliver public services by becoming a global leader in safe AI development and deployment.' - The Rt Hon Michelle Donelan MP, Secretary of State for Science, Innovation and Technology.