At the end of last week, I joined Westminster Business Forum’s Policy Conference looking at “The next steps for AI in UK financial services”. It brought together a range of experts who covered deeply interwoven topics and tackled some of the biggest questions currently pressing on the sector around AI deployment. Here are my takeaways, starting with those that came last in the conference timings but which might be of most interest to the industry right now - the observations of the regulator.
No new regulations from the FCA, but clear indications of what matters most in the regulations we already have
Governance becomes capability: AI literacy at senior level is not optional; it is fundamental. Boards need real competence: they need to understand the AI systems they are using, the data on which those systems depend, and how their models operate;
Senior accountability requirements: These requirements are already clear and will apply to AI. Firms must be able to evidence the competency of their people; if something goes wrong, they must be able to explain how harms occurred and how to prevent them recurring; and those who are responsible will be held to account;
Third party risks: Dependency mapping is essential. Firms must ask themselves what their alternatives are if a model fails, and they will need to be able to switch to a back-up plan very quickly; and
Consumer protection is non-negotiable: Firms must deliver good outcomes to their consumers, and they must be able to demonstrate both that accountability exists and that good outcomes are designed into their systems.
Fresh regulatory insights:
In terms of ongoing collaboration with industry, it was interesting to hear the FCA note that:
Good and bad practice: Published insights illustrating good and bad practice based on learnings from the sandboxes are coming;
A knowledge academy: Knowledge learned from the sandboxes will be scaled out to industry; and
Fitness for purpose: The adequacy of the UK’s current principles-based regulatory approach for evolving technology will be kept under review.
Some of what the FCA touched on in its address had been discussed throughout the earlier sessions. Strong themes emerged from the insights shared, including the following:
Leadership:
To ensure that AI is transformative and not an isolated experiment, firms need to ensure that:
CEO: AI is a CEO priority;
Leadership by intent: CEOs must lead their firms through change so that AI scales with the right level of intent; and
Strategy: Corporate strategies must allow for innovation in a healthy and secure environment.
Accountability:
To repeat the regulator’s messaging, those who are responsible will be held to account if something goes wrong:
Responsibility: Though there may be difficult legal questions still to answer about where responsibility rests as between vendors, firms, and others, from a regulatory perspective accountability rests with the firm;
No delegation of responsibility: If a firm takes the decision to deploy an AI tool, that firm is responsible for it; that responsibility cannot be passed on to the tech vendor (whether that is right is a separate question, and no doubt one for much discussion). This means it is vital for firms to understand the model they are being sold, do their due diligence, have the right data in the right place, and have the right contract in place; and
An example of a strong model: A strong model of accountability should include a clear owner, clear limits, and a way to effect a hard stop. This might include: (1) a risk appetite set by the board; (2) a suitably established risk committee; (3) named SMFs from end to end; (4) an executive-level governance body; (5) the three lines of defence (owning systems, setting standards, and testing); and (6) a kill switch (certainly for high-risk systems).
Risks:
The top-ranking (but by no means the only) concerns include:
Data privacy: Whether the source data is fit for AI purpose;
Concentration: Reliance on a few data providers creates single points of failure; the AI ecosystem is highly concentrated;
Outputs: AI outputs represent a new form of risk;
Explainability: Black boxes are not helpful; firms need to think about explainability from the outset and embed it;
Fake news, mis- and dis-information: The industry faces key risks including fake media accounts and confabulations;
Cyber: AI has created new opportunities for cyber-attacks and new challenges for cyber resilience; and
Weakness in the current regulatory regime: There are questions around whether the current regulatory framework can manage the risks and complexities of AI. There are grey areas; firms are looking for more guidance on deployment; we are yet to see the designation of critical third parties; and it is not yet clear how responsibilities will be distributed between financial services and technology firms.
Skills spectrum:
While there is much scaremongering about the potential for AI to remove the need for humans in the workforce, there are important and very human considerations to be made around:
Old dog, new tricks: An AI-driven world will require people to find new skills and embrace the need to retrain and reskill. Humans need to be elevated by AI, not necessarily replaced by it, and this change and development is cultural as well as technological. Firms need to address nervousness about job losses and tackle the questions of how to ensure the successful adoption of AI throughout the workforce, how to use AI in their business, and how to ensure that people are confident and skilled enough to do so;
Graduates and juniors: Careful thought needs to be given to the early careers space to ensure that those who enter the workforce benefit from a secure and well-rounded preparation for their career (for example, the taking of minutes in some professions is a task often regarded as crucial for the development of certain skills). School-leavers and graduates will still need strong foundations in the things that make us human (including key subjects such as history, politics, classics, and psychology);
Literacy: AI literacy is vital. Firms need to ensure that the right levels of learning are happening throughout the organisation, creating learning opportunities and training pathways that enable everyone who needs or wishes to learn to do so; and
Human oversight: Firms need to accept that AI will always require human oversight. Complete automation could do enormous damage to a firm’s reputation and to the wider markets; the question is one of balance and where to keep humans involved.
FOMO and some words of wisdom:
If the current AI hype feels a bit too much, and FOMO is maybe setting in, firms would do well to take a step back and not feel compelled to ‘jump on the bandwagon’. A few words of wisdom from those who have worked with machine learning for some time, before it was at the height of fashion:
Maturity: What firms decide to do with AI should depend on their AI-maturity and AI-readiness. The excitement of doing something new and the pressure not to be left behind are not necessarily helpful; a firm should only proceed when it is ready to do so, and innovation should not be reckless;
Data: Data is the fuel for AI; AI cannot happen without it. A firm’s data governance will give an essential view of its data, and with the benefit of that view firms can assess whether or not they can progress safely;
Expertise: Firms will need to assess whether they have the right people with the right skills. There is a full spectrum of skills that firms will need to employ or retain to understand and deploy AI, and there are currently large gaps between people who are skilled with AI and those who are not. If AI is going to be widely adopted, there must be wider levels of skill and expertise; without a broad skills base there will not be enough people who can spot things when they go wrong;
Transparency: Transparency with customers is very important. Firms will need to be very clear about where they are using AI, and will also need to preserve access to humans so that customers who value and need human contact can exit AI loops and reach a person where needed;
Humans make mistakes too: Humans can make mistakes, but we should not let AI mimic human failings; and
Start small: When firms do make a start on their AI journey, it should be a small start with ‘no-brainer’ use cases that can then be built out at scale.
Governance:
AI demands a fresh view on governance, and a mindset shift. Good governance is not a blocker - it can make good things possible, and safely so:
Generation of content and action taking: The unique features of AI demand a fundamental shift in governance;
Prompts: Prompts become a new form of data and in themselves require governance;
Box ticking: Box ticking is a thing of the past. Governance must be linked to real outcomes, and it must be cross-functional and operate in real time;
Policies: Policies are still needed but they cannot sit still on pages;
Decisions: Final decision makers need to be human; and
Treat AI like a new employee: Governing a new AI system could be seen as analogous to supervising a new employee - it needs to be trained, kept on track, checked on and monitored, all with an awareness that its behaviour could well change over time.
Infrastructure:
The underlying digital infrastructure will be key to powering the transition from experimentation to clear business value:
Policy needs attention: Policy is needed to address a number of issues relevant to critical national infrastructure, including access to hardware, alternative chip ecosystems, data centre readiness, sources of power, and connectivity to cloud services. There will be demand for secure facilities that offer the performance of hyperscale with the right level of privacy protection;
Regulatory environment: Financial services regulation must strike the correct balance between innovation, resilience, and security; and
Investment: There is significant inward investment into the UK, currently focused on the south of the country. There is likely to be a strong pull towards existing data sources, and towards London and other areas with significant transport links.
Trust:
The financial services sector continues to suffer from low levels of consumer trust, and there are AI-associated risks that could easily exacerbate this:
Inaccuracies in data: Firms may still have many inaccuracies in their data (for example, in credit files); if these are not spotted and fixed before being fed into AI, they could cause significant harm;
The underserved: Traditional financial services has created a system full of unfairness. Firms (and the wider industry) need to think about what the AI they deploy might do for those who are already excluded: millions live in vulnerable circumstances and are excluded from financial services, and for these people correctly deployed AI could solve many issues; and
Human touch: There are some things that technology cannot replace or replicate, including humanity and thoughtfulness. Firms need to think carefully about this before deploying AI to ensure that the best outcomes can be delivered.
Conclusions:
As the financial services sector awaits the outcome of The Mills Review, two final thoughts, both of which informed our response to that call for input:
consumer understanding of how AI affects them will be key if AI is to thrive and trust is to be maintained; and
from a regulatory perspective, it will be critical that AI outputs are explainable and that there is a person responsible for them (other relevant and related regulatory frameworks all push explainability and accountability to the highest levels).
And, to end the conclusions, a quote (hopefully not a mis-quote) from one of the expert speakers:
“You are playing with people’s money – you cannot afford to go wrong”.
Credit:
I hope that I have not done a disservice to any of the expert speakers, and I credit their original thoughts and contributions, upon which I have based this summary. Speakers included:
Lord Ranger of Northwood, Professor Bonnie Buchanan, Rachael Annear, Joanna Biggadike, Eleni Coldrey, Tanya Retter, Chris Davies, Kate Pender, Dr Rohit Dhawan, Sue Daley, James Chalinor, Aman Luther, Fraser Dear, and Sonia Luthra.
Our thought leadership:
You can subscribe to our monthly financial services regulation update by clicking here; click here for our AI blog, and here for our AI newsletter. You can meet our financial services experts here and our AI experts here.