Financial services – the strategic approach to Artificial Intelligence – The Financial Conduct Authority

On 22 April 2024, the FCA, Bank of England (BoE) and Prudential Regulation Authority (PRA) (together, the Regulators) published their responses to the government’s White Paper on regulating the use of AI and machine learning (ML). The BoE and PRA issued a joint response, and you can read our blog on that here. We consider the FCA’s position below. In short, the Regulators are working consistently with each other and are supportive of the government’s pro-innovation approach.
The Regulators continue to anticipate no mandated, prescriptive or rules-based legislative approach to the use of AI or ML. The rationale is that such an approach would limit the flexibility and agility of the Regulators in exercising their oversight and supervisory functions as they seek to provide workable solutions in this fast-changing and developing area.
Instead, the Regulators have been empowered to embrace the benefits, and address the risks, faced by the financial services sector in a principles-driven and outcomes-focused way that is more responsive and better suited to the needs of the markets as a whole. The key words applicable to the general approach are ‘responsible’ and ‘safe’, such that beneficial innovation can push forward in a way that maintains financial stability, trust and confidence.
The FCA published its response to the White Paper, ‘AI Update’, on its website. The FCA’s approach is aligned with that taken by the BoE and PRA: it is technology-agnostic, and the FCA considers that the existing regulatory framework can support AI and ML innovation in ways that benefit the financial services sector and the wider economy, while assessing and addressing the risks effectively. Like the BoE and PRA, the FCA highlights the need for the regulatory response to be agile and proportionate, and rooted in the foundational principle of strong accountability.
The Regulators are all investing heavily to ensure that they can support the safe development of AI, both for regulated entities and for their own use, where it has already made a transformational difference to the speed with which they can detect scams and sanctions breaches. The FCA has developed its Regulatory and Digital Sandboxes and TechSprints, and a dedicated FCA digital hub staffed with data scientists. It also cooperates with regulators in other sectors; of note here is the Digital Regulation Cooperation Forum (DRCF), a collaboration between the FCA and the Information Commissioner’s Office, Ofcom and the Competition and Markets Authority.
The FCA is aligned with the BoE and PRA in relation to the government’s ‘five principles’, which are key to the regulation of AI in the UK: (1) safety, security and robustness; (2) transparency and explainability; (3) fairness; (4) accountability and governance; and (5) contestability and redress. We have written in more detail about these principles in our blog on the BoE and PRA response, which you can read here. Some of the key FCA-specific requirements, in the context of these principles, are as follows:
Over the next year, the FCA will focus on exploring the benefits and risks of AI and ML in the context of its statutory objectives, which are to: (1) protect consumers; (2) protect and enhance the integrity of the UK financial system; and (3) promote effective competition in the financial services markets, by:
To conclude, the FCA is committed to continuing to develop its approach to data and technology with the aim of becoming an ‘innovative, assertive and adaptive regulator’. It is using AI to aid its development of tools that will better protect consumers and markets; using synthetic data to contribute to innovation; using ML to fight online scams; and working with surveillance specialists to develop and test AI-powered solutions that might one day be able to identify complex types of market abuse and other financial wrongdoing. The FCA must continue to be a technical expert on matters relating to financial services, and has invested heavily in its own innovation and technology functions and skillsets, as well as supporting firms to enhance innovation in the markets. The FCA is collaborating closely with other regulators, both domestically and internationally, to ensure consensus and alignment on best practice and on future regulatory developments.
“AI can make a significant contribution to economic growth, capital market efficiencies and improved consumer outcomes, as well as good regulation. This requires a strong regulatory framework that adapts, evolves and responds to the new challenges and risks that technology brings. We believe that we have the right foundations, collaboration and supervision in place. Continuing to ensure the safe and responsible deployment of AI in UK financial markets, in the interests of consumers and the markets we regulate, is a priority for the FCA.”