How much do we really know about the use of and exploration of AI in financial services?

The actual use being made of AI in financial services is evidently something of a secret. There is a hush around commercial crown jewels, uncertainty about how regulators will apply established principles in this brave new world, little clear information about what other organisations are doing, and, possibly, concern about reputational damage and regulatory sanctions if something were to go wrong.
The back-office
We know from published surveys, including those undertaken by the regulators, that AI is in use for cybersecurity, in customer service functions, in sales and marketing, in research and development, and in other ‘back office’ functions, but there is a distinct sense of apprehension around, and a lack of any real detail on, real-world commercial applications.
To get to the bottom of why this is, and to find out more about what is really going on, we rely on anecdotal information, discussions with colleagues, conversations with industry bodies, and expert-led financial services news.
Concept testing
In the news earlier this week, there was a feature about AI concepts being explored by a financial services firm. Concepts are obviously quite different from ‘use cases’, but this is where it all starts, and this most recent news rings true with other available information. Activity within financial services firms seems to revolve around cautious testing and steady iteration, allowing issues that crop up to be ironed out. Testing typically starts with working out the problem that needs to be solved, followed by an initial test phase in which no real data is used, then possibly a larger test phase that might include some real data, and then possibly ‘going live’ against some kind of implementation roadmap.
Key questions
One of the first questions firms address as they start their AI journeys is whether the problem they need to solve actually needs AI. AI could be a sledgehammer to crack a nut; something simpler might work. If AI is needed, what kind of AI is needed? Firms then have to think, and exercise care and caution, around a multifaceted mesh of issues including organisational risk tolerance levels, their ability to maintain human oversight of their chosen AI, accountability, AI-generated mistakes, the unpredictability of their AI's outputs, other ‘black box’ problems, skills gaps, and the highly sensitive nature of the source data.
Market analysis
A recent study of the use of AI in banks by Evident (an organisation that tracks and analyses how banks are using AI, and which will expand to the insurance sector later this year) suggests that the playing field is currently dominated by large multinational banks, led by US banks but with UK and EU banks gaining ground. The study focuses on a part of the financial services sector where investment resources are possibly at their highest, which is traditionally risk-averse, and where controls, risk management and governance frameworks, and consumer trust are foundational elements. It examines the lessons learned, the best practices adopted, and the balancing act between innovation and responsible deployment.
The big risks
The main risks cited for this sector, and there are many, include data risks, ethical risks, stability risks, cyber risks, third-party risks, sustainability risks, HR risks (including gaps in employee understanding), and the ability of governance, risk, and compliance frameworks to evolve with shifting risks.
First principles
The approach taken by the surveyed banks is a responsible one and starts with ‘first principles’: establishing accountability, ensuring transparency, anticipating regulatory requirements, and upholding ethical commitments and operational standards. Each bank then needs to translate these principles into systems that provide structure and actionable controls for its own particular business environment. The principles must be embedded into responsible practice at the design and testing stages, through deployment and monitoring, and throughout the entire lifecycle of any use case, in a way that is nimble and agile enough to evolve against emergent risks.
Tone from the top
This is not about transformations happening in silos. It is about driving the right culture and the right knowledge through entire organisations: embedding AI skills throughout organisational talent, developing skills dedicated to AI, fostering cross-functional collaboration, and empowering leaders with specific responsibility for AI. It is about evolving AI capability in environments where the impact on end users, the trust they will need in these systems in order to continue using them, and the views of the regulators are all front of mind.
Best practice development
The work being done by the leading banks is likely to shape the criteria applicable to other entities in the financial services sector, providing guidelines for them to mirror in their own evolution. It may also gain traction throughout the sector as good practice as firms work out and establish responsible ways to operationalise AI and become quicker, safer and more efficient, eventually unlocking competitive advantages.
Does the tortoise win the race?
Is this the art of going slow to go fast? Given the highly regulated nature of the industry, the need for careful handling of foundational issues, and the availability of sandboxes and other testing facilities, will early-stage caution enable financial services firms to reach the pace required to remain viable in an AI-driven world, with the appropriate guardrails in place around new and emergent risks? Once that point is reached, the competitive advantages of AI for financial services could really be unlocked.
Regulatory focus
The financial services sector regulators and the government are focused on this space. We have just finalised our response to the Treasury's Call for Evidence and will be monitoring closely for output in relation to that, as well as from other regulatory initiatives launched earlier this year which seek to unlock the advantages of AI for the financial services sector.
If you would like to discuss how current or future regulations impact what you do with AI, please contact me, Tom Whittaker, or Martin Cook. You can meet our financial services experts here and our technology experts here.
https://evidentinsights.com/reports/evident-responsible-ai-report-2025?id=ee70baf26d