The headline news for AI in financial services this week is that the FCA is not planning to introduce new regulations specifically for AI. The regulatory position, confirmed for now, is that existing rules and frameworks apply and are considered to “mitigate many of the risks associated with AI.”
The FCA highlights two key existing regulatory frameworks for firms to have front of mind as they consider whether to deploy AI and, if so, how:
Consumer Duty: financial services products, services, communications and support must meet the needs of customers; and
Senior Managers and Certification Regime ("SM&CR"): senior managers are accountable for the safe use of AI.
Most responses were received from non-regulated firms (52 out of the 67 responses), with only 15 regulated firms responding to the Engagement Paper.
The Summary includes diagrams (credit to the FCA) breaking down the demographics of the responses received from regulated firms, together with a similar diagram for the responses from non-regulated firms.
The overall tone of the feedback on Live Testing ("LT") is positive, with respondents highlighting the following key potential benefits and opportunities of LT:
enabling understanding of how AI performs in real-world conditions;
providing regulatory clarity and filling skills gaps;
bridging knowledge and guidance gaps around how to operationalise and measure AI;
redefining assurance to meet the demands of AI;
overcoming “first-mover reluctance” and encouraging safe experimentation;
encouraging firms to bring innovations to the market;
navigating regulatory and industry challenges collaboratively; and
developing shared technical understanding on complex and novel issues.
A recurring theme of the feedback is the complexity of AI deployment:
AI is not likely to be ready “out of the box”;
bespoke training data is likely to be required;
contingencies need to be built in;
software crashes and errors must be catered for; and
monitoring needs to be put into place.
Explainability is cited as a key aspect of this complexity and a driver of hesitancy for regulated firms: the lack of it makes AI difficult to understand, verify and audit, which creates challenges for oversight and governance.
Consumer impact is highlighted as a key priority, with specific reference to vulnerable customers, to the need to ensure that “AI deployment delivers genuinely fair, accessible and beneficial outcomes without inadvertently creating new forms of exclusion or harm”, and to AI models that “influence financial decisions or customer outcomes”.
The responses note numerous practical challenges around performance and evaluation, with many difficult questions to resolve: how to test, monitor and stress-test models; how to eliminate the possibility of harmful outcomes; how to determine appropriate levels of output accuracy; and how to assess readiness for deployment and whether models will hold up to real market conditions once released from the laboratory.
Data is noted as a key factor, with the primary challenge being to ensure there is “sufficient and high-quality data … for robust AI model testing and optimisation”. The black-box problem is another key data-related issue for financial services firms.
Implementing appropriate governance frameworks to manage AI risk is proving challenging due to a number of factors, including:
the difficulty of structuring and organising accountability around emerging issues and emerging risks;
the lack of any agreed framework;
the time required to assess risks and determine risk appetites;
ambiguities in the SM&CR;
the need to incorporate AI risks into operational resilience processes;
the lead time required to build “senior leadership confidence”; and
continued “market skepticism about AI reliability in financial contexts”.
Transparency is a key issue, with particular challenges around the expectations that may be placed on third-party suppliers. Third parties may be resistant to providing detail on significant factors such as the underlying logic, algorithms, provenance of training data, testing and evaluation undertaken, error margins and model-updating methods. This lack of transparency creates potential difficulties for regulated firms in explaining decision-making, ensuring good governance and compliance, and assessing and mitigating risks and liabilities.
The LT application window remains open until Monday next week, 15 September. The FCA and the first cohort of participating firms will start working together next month. A second LT application window is likely to open before the end of the year. Although transparency with the wider industry is key to the LT initiative, it is unlikely that much information will be reported to the wider market before an initial 12-month period has passed.
The Summary closes with an analysis of where respondents highlighted the need for more assistance from the FCA, beyond LT, in their AI adoption journeys. Four broad themes emerge:
Model evaluation and validation: the development of standardised, regulator-acknowledged performance benchmarks; data availability and sharing; more outcomes focus; comprehensive model assurance and regulator-monitored incident reporting; incorporating environmental impact metrics; and graduated, risk-based regulatory requirements.
Scenario testing: market crisis simulation stress-testing; a safe harbour for certain disclosures (such as failures) in return for a proportionate regulatory response; and assistance in transitioning from test to real-world scenarios.
Fairness: clarification of the regulatory expectations around unintended discrimination and responsible bias mitigation.
Collaboration: promoting a culture of shared experimentation and shared learning; publication of success stories and case studies; joining-up with similar international initiatives to support interoperability; and the publication of research and thought-leadership.
The FCA commits to considering the suggestions received as it presses on with LT and continues its work in supporting the financial services sector to adopt AI safely and responsibly. It is clear that this will be no small undertaking.
You can read more updates like this one by subscribing to our monthly financial services regulation update here, reading our AI blog here, and signing up to our AI newsletter here.
If you would like to discuss how current or future regulations impact what you do with AI, please contact me, Tom Whittaker, or Martin Cook. You can meet our financial services experts here and our technology experts here.
The FCA's position, in its own words:
“Our regulatory approach is principles-based and focused on outcomes. We want to give firms flexibility to adapt to technological change and market developments, rather than detailed and prescriptive rules.
We do not plan to introduce extra regulations for AI. Instead, we’ll rely on existing frameworks, which mitigate many of the risks associated with AI.
We believe that with a fast-moving technology like AI, this is the best way to support UK growth and competitiveness.”