As 2025 draws to a close, here are my top ten takeaways from the year for financial services firms looking to deploy AI:
- AI-specific regulation is not coming: Do not wait for new AI-specific regulation from the financial services regulators; it is not coming. Think about the regulatory rules that already apply to you, including the Consumer Duty (meet the needs of your customers) and the Senior Managers and Certification Regime (your senior managers are accountable for the safe use of AI);
- Risks and outcomes: Focus on addressing the risks that your business faces and on the outcomes that your business needs to achieve;
- Governance is one of your risks: Good culture, good governance, and an ability to “do the right thing” in a nimble and agile way are going to be critical components of accountability in our increasingly complex and rapidly changing digitally enhanced world. Be forward-thinking in your approach to culture and drive good conduct and behaviour around your core values;
- Aim to keep and build stakeholder trust in your approach: Trust is inextricably linked to your reputation and your brand, and securing it, both in the technology used and in those who deploy it, will be key. Without trust, the promised returns, including growth and wider societal benefits, are unlikely to materialise;
- Collaborate: The answers to the complex problems that we face are interdisciplinary. Collaboration across the entire ecosystem, between regulators, between regulators and industry, between sectors, between different expert skill sets, and between businesses, will be vital to ensuring that the regulatory environment is agile;
- Experiment: While we are still in the experimentation phase, sandbox and other experimental environments are excellent vehicles for supporting collaboration and are likely to be a key part of the way forward in our ability to balance trust, innovation and opportunity;
- Hype: Experiment, adapt and learn in a prudent and strategic way. Do not jump on the bandwagon and buy and deploy a new technology just because everyone is talking about it. Keep front of mind that many off-the-shelf models will not be suited to a highly regulated environment and that, alongside speed, efficiency and the other touted advantages, there will come risks that you must deal with to the satisfaction of the regulator. To avoid delivering a zero return, you first need a problem that AI can solve. Then you need to find suitable AI for that problem. Then you need a clear strategy to build your digital foundations, streamline your data, and move to the cloud. You need a timeline, a budget, a roadmap, and preparedness for the never-ending process of keeping ahead: making sure that everyone in your business is using the tools that you invest in, and developing the skills and capabilities that will be needed to embed your AI and return measurable gains;
- Humans: AI can technologically enhance humans, but it cannot replace them; humans will remain vital as advanced technology integrates into everyday life. Trusted human relationships will remain critical in financial journeys, especially at those times in life that are highly emotional and where big-picture understanding matters. Alienating consumers who value human interaction and involvement could be costly. Humans, particularly those with the right expertise, will be essential for ensuring clear accountability for, and oversight of, any AI that you deploy;
- Evidence: Many firms are still nervous about sharing details of their AI projects and there is little publicly available information. I have tracked Evident AI throughout 2025, which gives valuable insights into the activities of the world’s leading banks. We can expect more next year in the form of the “AI in Financial Services 2030 Global Survey”, which should provide the evidence needed to help more firms to move beyond pilots; and finally…
- ESG: The question of how much energy AI consumes will need to be addressed, as will its societal potential and peril. Models that pack compute power, deliver efficiencies and use less energy are likely to be more attractive to firms that have sustainability intentions and wish to build trust and reputation around those.
Race or reality?
Technology is moving significantly faster than the ability of financial institutions to fully grapple with the legal and regulatory risks and turn pilots into tangible results. The gap between the speed of development and the ability to adopt is evident in the industry’s head start on agentic AI before it has answered its own questions in relation to generative AI.
Contradictory news stories sometimes point to successful large-scale deployment and at other times to significant barriers to deployment and adoption. Clear barriers still exist, for most firms, in relation to AI-specific policies, guidelines, and internal governance frameworks, and frictions remain around their ability to obtain straightforward explanations from providers about how the models work and how safe firm data will be once imported.
Do the right thing
Doing the right thing is a strong regulatory theme. It is something that I wrote about recently in relation to enforcement outcomes (Do you know how to "do the right thing" (in financial services)? - Burges Salmon). Its importance as a cornerstone of good governance is critical to ensuring good outcomes in a firm’s regulatory journey, strong resilience to all manner of external factors, positive economic performance, and ultimately good outcomes for its consumers.
Good culture is very different from ‘compliance’. To get it right, to avoid situations where things go wrong, and to turn them around when they are starting to go wrong, firms need to think more about their sense of collective purpose and their shared values and intentions.
Ticking boxes is not an approach that will help firms to handle complex situations that need resolving quickly. However, a fully embedded and collectively shared sense of how to ‘do the right thing’ will empower a firm to respond, and be resilient and nimble, in the face of ever-evolving challenges. This is complex, hard, human and behavioural, but is no longer optional. It is necessitated by the unpredictable and dynamic nature of the technological systems under consideration and of the volatile environment in which firms are now operating.
As the themes around the deployment of AI in financial services have evolved throughout the year, the need for strong culture, as a driver of the kind of governance that will be robust enough and nimble enough to handle AI deployment and keep the regulator satisfied, has emerged as a clear foundational requirement.
2025: a sprint through the year
January was the month of the FCA’s AI Sprint: a collaboration of people with different skill sets, focusing through a regulatory lens on AI now, in five years’ time, and in ten. Not a month (or, in fact, a week) has since gone by when AI was not in the news. Here is an overview of some of the stories:
- In February, there was a call for evidence from the government (AI in Financial Services - a new Call for Evidence - Burges Salmon) as it started to appraise the deployment of AI in financial services;
- In March, IOSCO published a report into the rise of AI in the capital markets (Would you like to know more about how AI is being deployed in capital markets around the world? - Burges Salmon);
- In April, Evident AI released a report about the steps being taken by the world’s leading banks (How much do we really know about the use of and exploration of AI in financial services? - Burges Salmon), and news came out from the FCA informing the markets about the outcomes of the AI Sprint in January (What can 115 experts tell you about the FCA's latest thinking on AI in financial services? - Burges Salmon), highlighting the FCA’s big four themes of regulatory clarity, trust and risk awareness, collaboration and co-ordination, and safety through sandboxing;
- In May, the FCA launched its AI Live Testing initiative (The FCA is launching AI live testing - could this be what the financial services sector has been waiting for? - Burges Salmon), a scheme designed to overcome some of the prevailing challenges and help firms make the leap from proof-of-concept to market in their AI journeys by enabling them to collaborate with the FCA to test and evolve AI solutions before releasing them to the market. There was another report from Evident AI, this time focusing on the recruitment of AI talent by the leading banks (AI’s Got Talent! - Burges Salmon), and some high-profile deepfake avatars hit the news (Will the real Avatar please stand up? - Burges Salmon);
- In June, the FCA committed to developing ‘a statutory code of practice’ for firms developing or deploying AI and/or automated decision-making, and demonstrated its commitment to helping more firms experiment with AI as part of its AI Lab, recognising that smaller firms may need more support. To this end, it flagged its intention to hold a tailored roundtable with smaller firms later in the year. The FCA also announced its collaboration with Nvidia (What support does the FCA offer to firms who want to test their AI ideas? - Burges Salmon);
- In July, the FCA gave a speech delivering insights into uptake of the Regulatory Sandbox, Supercharged Sandbox and Live Testing facilities that it has established to boost the deployment of safe and responsible AI in financial services (AI innovation in financial services - the latest insights straight from the horse's mouth - Burges Salmon);
- In August, another update from Evident AI hit the inbox (AI in financial services, so what's the latest? - Burges Salmon);
- In September, the FCA announced an update to its AI Live Testing initiative and confirmed that it is not planning to introduce new regulations specifically for AI (AI Live Testing - the latest update from the FCA - Burges Salmon);
- In October, there was a key update from the Bank of England about the use of AI in financial services (AI in financial services: an update from the Bank of England - Burges Salmon) and about technological transformation more generally (The latest from the Bank of England on technological innovation and the importance of strong foundations - Burges Salmon). We also had more insights from Evident AI (AI in financial services, the latest insights from Evident - Burges Salmon); and
- In November, we had a report from the British Standards Institute on the role of trust in the deployment of AI (British Standards Institute: promoting trust in AI - Burges Salmon) which, although sector-agnostic, made some highly relevant references to the approach being taken by the financial services sector.
An end of year update
A few themes are worth noting as we round off the year and look ahead to the next one:
- AI Live Testing: The first cohort of firms to take part in the AI Live Testing programme has been named, with a second cohort to come later next year. These firms will now be working with the FCA to ‘develop, assess and deploy "safe and responsible" AI in the UK financial markets’. The initiative will help firms and the regulator to understand how AI could shape UK markets, inform the future regulatory approach, and address key issues and blockers in the way of progress. With evidence about deployment still being slow to emerge, no doubt many will be interested to hear real evidence from the work of the first cohort.
- Agentic AI: Agentic AI has enormous potential for the financial services industry, where there is a range of uses to which it appears well suited, including generating reactive market research, analysing and investigating fraud patterns, interacting with firms on behalf of customers and initiating financial transactions. However, agentic AI presents peril in equal measure, exacerbating some already known risks and introducing new ones. Reports suggest around a third of banks are already piloting agentic AI, with around half of those pilots expected to go live in 2026. This suggests a reality where one-click processes, like taking out a loan or mortgage, will complete at high speed, with all component processes taking place invisibly in the background.
- Working practices and post-adoption strategy: AI and other technologies are changing the way that we work, with the potential to replace outdated legacy systems, streamline data pipelines, remove manual effort, protect against nefarious activity and boost defences against fraud. These developments mean that business models will evolve and the workforce will need to recalibrate, which could mean reductions in overall headcount. However, AI-related roles have seen surges in demand, and there are related skills shortages. These changes will demand a shift in skills and roles: what the new roles will be, what new skills the ecosystem will need people to have, what career pathways will look like, and the level of competition for talent that employers will face, are not yet clear.
- Friction points: AI is currently deployed mainly in places where it can accelerate back-office functions. However, the magic involved in turning newfound efficiencies into growth is still eluding many firms. Most firms still face pain points around data, legacy systems, vague objectives, risks around accountability, questions around explainability, and the availability of talent.
- Use cases: Directly applicable practical use cases are emerging and the track from proof of concept to market is likely to pick up pace through next year. We may see bolder steps in the direction of greater personalisation, increased financial inclusion, real-time decision-making, improved fraud detection, better chatbots, and the automation of complex processes. A couple of areas where real change might be expected include the provision of advice, where technology could be part of the answer to the longstanding issue of too few getting access to the advice that they need, and mortgages, where speed and efficiency in joining up the many tasks that make up an application could deliver groundbreaking developments.
Conclusion
Financial services firms that wish to deploy AI need to firmly embrace responsible corporate behaviour. The wins from the first round of available AI solutions are likely to be very different from the wins needed to deliver customer-facing applications that drive good outcomes for consumers as well as profits for the business. So far, we have only scratched the surface. Over the next year or so, market-facing applications will likely become more numerous and will demand the oversight of skilled and competent humans who understand the risks and are prepared to take responsibility for them.
A firm’s culture, and the governance and behaviours it adopts in dealing with the challenges that lie ahead, are themselves a risk. Risk management is going to have to become much more proactive. In a faster-paced world, weaknesses in culture are likely to cause failure cascades that are more rapid than we have seen before. This demands that the early warning signs, the red flags and the toxic issues are surfaced and resolved with matching speed.
The FCA has highlighted the Consumer Duty and the Senior Managers and Certification Regime as the regulatory foundations upon which firms should build their chosen AI capabilities. Firms should therefore expect good outcomes for consumers, and real responsibility for their business leaders, to be the hard, concrete pillars for which the FCA will want to see tangible evidence when it looks into how a firm has deployed AI.
You can read more updates like this by subscribing to our monthly financial services regulation update by clicking here, clicking here for our AI blog, and here for our AI newsletter.
If you would like to discuss how current or future regulations impact what you do with AI, please contact me, Tom Whittaker, or Martin Cook. You can meet our financial services experts here and our technology experts here.