Artificial intelligence in financial services – challenges and risks

The FCA and Bank of England have published a report on the findings of the Artificial Intelligence Public-Private Forum (AIPPF) about the challenges and risks of using artificial intelligence (AI) in financial services. The report aims to advance the collective understanding of the use of AI in financial services, and to promote further debate among academics, practitioners, and regulators about how best to support the safe adoption of the technology.
The AIPPF report considers three topics: data, model risk, and governance.
This article looks at governance (the report's discussion of data and model risk is also worth reading in full).
There are various governance issues associated with procuring, developing and deploying AI, and approaches to governance are naturally the subject of much scrutiny. Governance is a standalone theme in the UK National AI Strategy (which we wrote about here), and there is ongoing debate about what good governance requires (in this article we wrote about recommendations for the UK Office for AI's white paper on the UK Government's approach to AI regulation, due in early 2022).
Good governance is needed to ensure the safe adoption of AI in financial services. The risks of using AI need to be identified and managed, which is especially important given the novel and dynamic ways in which the technology is applied. In practice, challenges may arise when AI systems do not align neatly with existing operational or product governance functions, making it difficult to maintain clearly defined lines of accountability and co-ordination. Further, given the variety of potential uses of AI and the contextual differences between them, a single, common approach is unlikely to suit all financial services firms; what good governance looks like will depend on each firm and each AI system.
Financial services firms are well used to identifying and managing regulatory, conduct/customer and operational risks, and to adopting appropriate governance to manage them. Even so, there are some differences to bear in mind between managing AI and managing other risks.
The report sets out key findings for firms to consider when designing their governance arrangements.
Discussion about the safe adoption of AI has only just begun, and the report expects engagement between firms and regulators to continue.
A quick disclaimer: the report's conclusions are based on the views of individuals drawn from, but not speaking on behalf of, various regulatory bodies and financial services firms. Nevertheless, the report is worth reading in full to understand the key issues in deploying AI safely in financial services; it provides a level of detail that a summary like this cannot.
If you want to discuss the topics raised here please contact Tom Whittaker, Martin Cook or your usual Burges Salmon contact.
Artificial intelligence (AI) is increasingly used in UK financial services and Covid-19 has accelerated the pace of adoption. The technology can bring a range of benefits to consumers, firms, and the wider financial system; but there are also barriers to adoption and challenges. The use of AI in financial services may also create new risks or amplify existing ones. Therefore, financial services firms need to maintain appropriate controls and focus on the resilience of their AI systems. At the same time, clarity from regulators about their expectations is a critical part of fostering innovation and may support safe adoption. Engagement between firms and regulators is one way to address these issues.
https://www.bankofengland.co.uk/research/fintech/ai-public-private-forum