What can 115 experts tell you about the FCA’s latest thinking on AI in financial services?

It has been a while since I was at the FCA’s AI Sprint earlier this year, and it is good to see that more information from the regulator about its activity in this space is now percolating down to the markets. This recent blog from the FCA gives some insight into the format of the AI Sprint and the angles that were being considered by the 115 experts who were in that room. 

The big four themes of the AI Sprint were: 

  • Regulatory clarity: the need for the FCA to help firms to understand precisely how the existing regulatory requirements apply in the AI context;
  • Trust & risk awareness: the need for us to be able to trust AI as a pre-requisite to deploying it;
  • Collaboration and co-ordination: the need for ongoing collaboration between regulators (both in the UK and internationally), government, firms, developers, users, academics etc. This collaboration will help the sector to deal with the uncertainties that it needs to iron out. Data is currently a big uncertainty and on this there is a forthcoming roundtable at which the FCA and the ICO will meet together with representatives from the various interested industry bodies to examine the challenges in this space; and
  • Safety through sandboxing: the need to continue developing and making available safe space testing environments that will help innovators with the tools that are needed to drive the safe adoption of AI in financial services.

There is more detail about the AI Sprint on the FCA’s website, including the time horizons considered in the discussions (from now into the future, in five- and ten-year chunks), the use cases examined, the conditions considered essential to enable safe and responsible adoption, the enablers of safe adoption, the regulatory requirements needed, and what good AI-driven consumer outcomes might look like.

The FCA seems clear that it does not wish to create new regulations to deal with AI in financial services, but that more work is needed to put it beyond doubt that the existing regulations are fit for that purpose. The FCA seems to consider that the UK has all the ingredients for a recipe in which AI drives growth and better consumer outcomes, and to acknowledge that what is needed to make that recipe work is clarity.

The AI Sprint is described in the recent blog as “just the beginning”. Although we know that the regulators (not just the FCA) have been looking at and using AI for years, things continue to move at pace, and the regulators are continually evolving their understanding of deployment in the markets against the latest evidence and expertise. The AI Sprint gave the FCA valuable insight, which it is now working through and using to shape its future work: informing its view of the future regulatory approach to AI in financial services and considering how to create the right environment for growth and innovation.

What else is the FCA doing? The AI Spotlight (a showcase of real-world experiments with AI in financial services) remains open. The FCA is expanding its AI Lab and Supercharged Sandbox to “accelerate innovation” by providing greater computing power, enriched data sets, and increased AI testing capabilities, and firms are invited to collaborate and experiment within them. It is also developing its safe testing environments and continuing to engage with other regulators to address the major issues and inconsistencies that are causing hesitancy in the sector.

While we wait for the regulator to provide the much-anticipated clarity, what can firms do? In a previous post I asked ‘did the tortoise win the race?’, and in the spirit of the ancient fable from which that question comes, it seems perfectly acceptable to acknowledge that the current eagerness to innovate being tempered by hesitancy, both about how to innovate and about how the current regulations apply to AI, is not necessarily a bad thing given what the financial services sector has at stake. Whatever steps firms take (and smaller firms can leverage ‘second mover’ advantages from the bigger ones who might have deeper pockets), some good old-fashioned common sense and some well-established principles of good governance will go a long way to making an AI journey a successful one. A few thoughts:

  • establish a solid governance framework around your AI intentions;
  • build teams of real experts around your AI;
  • identify your security weaknesses and set up protocols around the risks you face;
  • understand the job that you want AI to do and choose suitable AI for this job;
  • understand how your AI works, i.e. know how it makes decisions, know how to break it, know how to fix it, and understand and be able to explain its outputs;
  • test and monitor your AI often; and
  • have trust in your AI before you deploy it.

If you would like to discuss how current or future regulations impact what you do with AI, please contact me, Tom Whittaker, or Martin Cook. You can meet our financial services experts here and our technology experts here.

You can read more thought-leadership like this by subscribing to our monthly financial services regulation update by clicking here; you can also click here for our AI blog and here for our AI newsletter.

It was the perfect introduction to think about the FCA’s role in AI, and how the regulatory framework we already have in place can support the flourishing of AI in financial services.

https://www.fca.org.uk/news/blogs/ai-through-different-lens-what-115-experts-taught-us-about-ai-innovation