AI regulation in the UK: Government White Paper published

The UK Government has published a White Paper setting out how it proposes to regulate Artificial Intelligence (AI).
This article covers the key points that organisations looking to procure, develop and deploy AI need to know:
The proposed regulatory framework applies to the whole of the UK. It does not change the territorial applicability of existing legislation relevant to AI (including, for example, data protection legislation). It does not seek to address wider societal and global challenges related to the development and use of AI, such as access to data, compute capacity, and balancing the rights of content producers and AI developers (or, for example, proposals for an AI Convention).
Organisations, public and private, looking to procure, develop and deploy AI systems need to be aware:
The objectives of the regulatory approach are to:
The 'essential characteristics' of the regulatory regime are:
The above is a statement of the Government's intent and direction, relevant to regulators and industry. The UK Government is aware of the EU's proposed AI Act, and concerns have been raised about the impact of the EU AI Act on start-ups (for example, in this survey). The UK has actively chosen to pursue a different approach. For example, the UK Government's policy paper of July 2022 (the forerunner of the White Paper) said that, in the Government's view, setting a 'relatively fixed definition' of AI was not the right approach for the UK.
How is AI different to other technologies so that a specific regulatory framework is warranted? The two key characteristics of AI systems, arising on their own or together, are:
The existence of one of the above, or a combination, means that it can be difficult to explain, predict or control the AI system outputs, and challenging to allocate responsibility for its operation and outputs.
The aim of defining AI by its characteristics is to future-proof the framework against unanticipated new technologies; the characteristics will be adapted if needed.
The regulatory framework is context specific. It focuses on the outcomes AI is likely to generate in particular applications. There will not be rules or risk levels for entire sectors or technologies. Existing regulators will implement the proposed regulatory framework. The justification is that existing regulators understand their sectors, are best placed to conduct AI risk assessments, and determine how existing regulations should be applied or adapted.
This is in contrast to the EU's approach under the EU AI Act which, amongst other things, will introduce 'horizontal' regulation cutting across multiple sectors.
The effectiveness of the proposed regulatory framework will depend, in part, on the regulators. The White Paper recognises that existing regulators and organisations vary in how much work they have done to adapt existing regulations to AI. For example: the FCA are due to report on their consultation into AI in financial services and are active in examining responsible AI; the ICO has published guidance on AI and data protection. However, regulators also have differing levels of capability to understand AI, including: the technology; its use cases; and impacts on business models.
Existing regulators will be expected to implement the framework using five 'values-focussed' cross-sectoral principles. These build on the OECD's AI principles, although do not mirror the language exactly.
The principles are intended to be applied by regulators proportionately. This suggests that regulators should focus on the AI systems and uses of the highest risks (similar to how the EU AI Act sets out different obligations and restrictions for AI uses of different levels of risk).
The principles are also intended to be applied so as to complement existing regulation and be in accordance with existing laws and regulations. Regulators, individually or potentially collectively, will produce guidance on how the principles apply and what best practice looks like. Examples of this already exist, such as the NHS AI and Digital Regulations Service offering a simpler 'shop front' for those they regulate. It is possible for principles to conflict with each other and with other regulation; again, regulators (individually and collectively) will need to consider what is appropriate in the circumstances.
The principles will be issued on a non-statutory basis. This gives government the flexibility to change them following monitoring and evaluation of their use. However, the White Paper notes that some regulators have 'expressed concerns that they lack the statutory basis to consider the application of the principles.' New laws remain a possibility: a potential new duty 'requiring regulators to have due regard to the principles' is mooted.
The following table sets out those principles, how they are defined and explained, and factors that the White Paper expects regulators will want to consider when implementing or providing guidance about those principles.
Principle | Definition and explanation | Factors regulators may wish to consider when providing guidance / implementing
--- | --- | ---
Safety, security, robustness | AI systems should function in a robust, secure and safe way throughout the AI life cycle, and risks should be continually identified, assessed and managed. Regulators may need to introduce measures for regulated entities to ensure that AI systems are technically secure and function reliably as intended throughout their entire life cycle. | Provide guidance about this principle including considerations of good cybersecurity practices and privacy practices. Refer to a risk management framework that AI life cycle actors should apply. |
Appropriate transparency and explainability | AI systems should be appropriately transparent and explainable. Transparency refers to the communication of appropriate information about an AI system to relevant people (for example, information on how, when, and for which purposes an AI system is being used). Explainability refers to the extent to which it is possible for relevant parties to access, interpret and understand the decision-making processes of an AI system. An appropriate level of transparency and explainability will mean that regulators have sufficient information about AI systems and their associated inputs and outputs to give meaningful effect to the other principles (for example, to identify accountability). An appropriate degree of transparency and explainability should be proportionate to the risk(s) presented by an AI system. | Set expectations for AI life cycle actors to proactively or retrospectively provide information relating to: the nature and purpose of the AI; the data being used and information relating to training data; the logic and process used; accountability for the AI and any specific outcomes. Set explainability requirements, particularly for high risk systems. |
Fairness | AI systems should not undermine the legal rights of individuals or organisations, discriminate unfairly against individuals or create unfair market outcomes. Actors involved in all stages of the AI life cycle should consider definitions of fairness that are appropriate to a system’s use, outcomes and the application of relevant law. Fairness is a concept embedded across many areas of law and regulation, including equality and human rights, data protection, consumer and competition law, public and common law, and rules protecting vulnerable people. | Interpret fairness for their sector and decide when it is important and relevant. Design, implement and enforce appropriate governance requirements for fairness. If a decision involving AI has a legal or significant effect on an individual, consider whether the AI system operator needs to provide an appropriate justification. |
Accountability and governance | Governance measures should be in place to ensure effective oversight of the supply and use of AI systems, with clear lines of accountability established across the AI life cycle. AI life cycle actors should take steps to consider, incorporate and adhere to the principles and introduce measures necessary for the effective implementation of the principles at all stages of the AI life cycle. | Determine who is accountable for compliance with existing regulations and principles. Produce guidance on governance mechanisms. |
Contestability and redress | Where appropriate, users, impacted third parties and actors in the AI life cycle should be able to contest an AI decision or outcome that is harmful or creates material risk of harm. | Provide guidance on where those affected by AI harms should direct a complaint or dispute. Clarify interactions with the requirements of appropriate transparency and explainability, which act as pre-conditions of effective redress and contestability. |
For each of the above principles, except contestability and redress (where it is not mentioned), regulators should also consider the role of technical standards.
Government intends to put mechanisms in place to coordinate, monitor and adapt the regulatory framework. What these will look like in practice is not yet known; the government intends to publish further information in the next year (see What's next?).
The White Paper sets out what to expect in the short and medium term:
AI regulations are coming. We are actively involved with them (for example, we responded to the government's July 2022 policy paper). If you would like to respond to the consultation or discuss how current or future regulations impact what you do with AI, please contact Tom Whittaker or Brian Wong.
"I believe that a common-sense, outcomes-oriented approach is the best way to get right to the heart of delivering on the priorities of people across the UK. Better public services, high quality jobs and opportunities to learn the skills that will power our future – these are the priorities that will drive our goal to become a science and technology superpower by 2030." The Rt Hon Michelle Donelan MP, Secretary of State for Science, Innovation and Technology.
https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach