

Pensions Pod: Cyber and AI Bytes – The evolving landscape of AI in Pensions


In this episode of The Pensions Pod: Cyber and AI Bytes, Chris Brown and Tom Whittaker discuss the evolving landscape of AI in the pensions sector, focusing on regulatory developments, adoption rates, and the understanding of AI among clients. They explore the implications of AI regulation in the EU and UK, the current state of AI adoption in the pensions market, and the importance of mitigating risks while maximising opportunities for trustees and pension providers.


Chris Brown, Partner, Burges Salmon

Hello and welcome to our listeners. This is another episode of our spin-off series from the main Burges Salmon Pensions Pod, which we’re calling Pensions Pod: Cyber and AI Bytes. I’m Chris Brown. I’m a Partner in the Pensions and Lifetime Savings team here at Burges Salmon, and I lead our AI advice to pensions clients. For this bite-size episode, I’m joined by Tom Whittaker, who is a Director in our Dispute Resolution team and Head of our Advisory AI practice, and we’re going to be updating you, the listeners, on the AI landscape and what it means for trustees and pension providers.

Right, well, Tom, we’ve got three things we want to talk about, so let’s get down to business. Number one, I suppose, is that AI regulation continues to evolve, doesn’t it?

Tom Whittaker, Director, Head of AI (Advisory), Burges Salmon (00:57)

It does. And it does so at different speeds, in different places and in different ways. So really, for most of our listeners, how it impacts them is going to be quite different depending on their circumstances. But there are a few big themes that we can see.

So first of all, we can see from the EU that in some ways it wants to push ahead with AI regulation and ensure that fundamental rights are respected, but it also sees regulation as key to innovation, because unless you have safe, secure and trustworthy AI, there’s a risk that you don’t get any of the benefits of AI at all. And so you see that they’ve introduced the Digital Omnibus package, which is yet to be agreed but is an attempt to simplify a range of digital legislation in the EU. There’s still some uncertainty as to whether that will kick in before some of the AI Act provisions are due to kick in, so it’s certainly a moving feast with some of the EU regulations. However, from what I see in the market, many people are still gravitating towards the EU standards, anticipating that they will become law at some point in some way, or at the very least present some form of gold standard which will be relatively interoperable with other jurisdictions around the world.

Now, compare and contrast that briefly with the UK’s position: the UK still doesn’t have an AI bill. We’ve had the occasional announcement suggesting it will go out to consultation, but then ultimately... tumbleweed, that’s the phrase I’m looking for. Instead, what you see from the UK government is a series of announcements. Take, for example, the AI Opportunities Action Plan earlier in the year, or the talk of AI growth zones, the first of which is underway with more to come. There’s also the proposed AI Growth Lab, which is subject to consultation and is all about potentially changing some regulation to try and spur on innovation. And then, of course, with the UK you have plenty of big announcements around investment in data centres, infrastructure, talent, access to compute and more. So, the UK government is taking a very different approach to all of this.

But one final point, and one that is often missed: just because there is relatively little in the form of AI-specific regulation doesn’t mean that there’s nothing. The UK has hundreds of years of common law and other statutes which will apply and which people should be considering as well. They will apply.

Chris Brown (03:23)

Yeah, absolutely. Thanks, Tom. I think that’s really helpful. I just want to overlay the approach in the pensions industry from the UK Pensions Regulator, where I suppose we’ve still had quite minimal information from the Regulator. There are no direct references to AI in the General Code, but there are some indirect references.

The Regulator has set out, in its most recent Corporate Plan (2024-2027) and in its Digital, Data and Technology Strategy from October 2024, how it is using AI itself. But really the most substantive comment from the Regulator is in a speech from the CEO in June 2025, where she said that schemes must understand the role of AI in the industry. So, there’s clear recognition that schemes need to be taking action there, but only in a speech from the Regulator. The update for our listeners is that we’ve recently had a little more substantive guidance on AI usage: the PASA guidance published on the 28th of October 2025, which provides practical support for schemes, administrators and trustees to understand, and this is a phrase I’ve heard you use, Tom, both the opportunities and the risks of adopting AI within administration. So, if any of our listeners haven’t spoken about that PASA guidance with their service providers, particularly their administrators, that would be a good thing to do.

All right, the second point we wanted to talk about, Tom, was that AI is being widely adopted in the pensions market, although listeners will be at different stages of their AI journey, won’t they?

Tom Whittaker (05:05)

Yes, absolutely. I will always defer to you and colleagues on what’s going on in the pensions market specifically, so I’ll talk about it from a slightly higher level. It can be quite difficult to find comprehensive, evidence-based reviews of the current state of AI adoption in the market, but there are a couple I would usually go to. The first is the Stanford University AI Index Report, which comes out annually and looks at a range of topics, including AI regulation, AI performance, and AI adoption and investment globally. What you can see is that AI systems are improving year on year against benchmarks, often beating human performance, and that the level of investment and adoption continues to increase across multiple jurisdictions globally. So certainly there’s plenty of enthusiasm, but also the work and effort going on behind it. The second would be sector-based reviews. There is one called Evident, which looks at financial services organisations globally, and within that you can see there’s still massive investment by many different financial organisations. They’re reporting improved performance, and they’re starting to use AI in customer-facing applications.

They’re saying that there is a return on investment, although the way those figures are compared isn’t quite complete, so maybe take them with a pinch of salt. And then you also have some other publications, such as those from the UK government about what the AI industry looks like in the UK. A slight issue there is the time lag: the data is usually about six to nine months old by the time it gets published.

And as we know, a lot moves on in that time. But clearly there is a direction of travel. I suppose we should pick up on that point about AI hype and whether it’s a bubble. Certainly, some people are suggesting that, or at least warning of it: the Bank of England, for example, has warned that there is a risk of an AI bubble.

You also see that there are some investors who are placing positions on the basis that there is a bubble. For those who are fans of The Big Short, whether in book or film form, one of the investors featured there, who shorted the housing market and did well financially out of the 2008 Global Financial Crisis, has now taken positions against some of the AI companies. Obviously he doesn’t have a crystal ball, but it suggests there are some areas where there could be risk. Now, from my perspective, the real risk is that there is a big difference between someone’s expectation and what happens in practice. They may think that what they are buying from a supplier is going to be really, really good, but in reality it takes a lot more time, a lot more investment and a lot more human effort to achieve things.

And maybe it’s not actually achieving quite what they wanted in the way that they wanted it. That doesn’t necessarily mean there was a misrepresentation or a contractual issue behind it, but I can certainly see, in those circumstances, people starting to look at the investments they made and the contracts they signed. As to exactly when that will happen, my personal prediction is it’s still a couple of years away, because people are still working out quite how they’re going to make use of AI, and where they’re going to see the return on investment, before they get to the point of pivoting or looking back on what they’ve done.

Chris Brown (08:55)

Yeah. Okay, Tom, that’s really helpful. Thank you. Feeding into that, anecdotally in our Pensions team we are seeing more and more clients who want to talk about AI and are interested in understanding the sorts of things you’ve just discussed: the opportunities and the risks. It’s coming across our clients’ desks more and more; it’s something they’re thinking about.

And I suppose that leads on to the third thing we wanted to talk about, which is that our clients’ understanding of AI is rapidly developing, isn’t it?

Tom Whittaker (09:34)

Yes. The way I would summarise it is that, over the last few years, there have been a series of waves in what’s generally happening in the market. The first wave, after ChatGPT was released two and a bit years ago, was one of two things. Either it was real excitement, so people wanted to play and engage, or it was real concern and fear about how others might use it. And so there was a quick move to ban certain applications, take technical measures to do that, and put policies in place saying that employees, or the people organisations worked with, shouldn’t be using AI. Then we moved through to a second phase, where you started to see more tech startups making use of the foundation models. They were building applications, going to market and trying to get market share, and people started to have demos. And so you started to see the market really engage with what the opportunities are here.

And then where the risks are as well, so a more nuanced understanding of both started to develop. What I think we’ve moved through to more recently, although this is a high-level generalisation, is a third wave, in which you start to see a movement away from some of those demos and pilots towards attempts at scaling some of those solutions within larger organisations. As a result, more people within those organisations are starting to see the opportunities and risks. They’re also being told to identify what their return on investment is going to be, identify what their use cases are going to be, and explain how they are making the most of it. So they’re giving much deeper thought to the opportunities, the risks, how to implement these things practically, and how to go about their AI strategy. They’re also realising the issues around data: who owns what, and the quality of the data they have access to.

And then they’re starting to think about their AI governance as well: the policies, frameworks, training, regulatory horizon scanning, contracts and so on that they need to get to grips with. The final bit I’ll mention, though, is that that’s what’s happening for clients, who are usually some form of organisation; what we’re also seeing is what’s happening with individuals. There are many people, in many different circumstances, saying that it’s individuals, the Joe Bloggs on the street, who can actually be much further advanced in their use of AI. That doesn’t mean they’re using it properly, and it doesn’t mean they’re using it well, but they are using it to get legal advice, to draft complaints, to draft requests and to draft legal documents. As a result, multiple organisations are saying they’re seeing an increase in the volume of their workload, and seeing some of that work stretched out, because people continue to press points and make arguments. So it’s really changing the dynamics of how those organisations interact with those individuals.

Chris Brown (12:36)

Yeah, I’m really glad you mentioned that, Tom, because I think a number of trustees of occupational pension schemes, unless they’re trustees of a very large scheme, are unlikely to be in that third wave you talked about just yet. But certainly, on an individual basis, their members might be very tech-savvy and using AI, and their service providers might be in that third phase. So, look, AI is here and is changing the way we work now. One way we’ve been thinking about it for pensions clients is to break it down by who uses AI, exactly as you’ve just done. There are three main risks, and it’s about thinking how to mitigate them and then how to maximise the opportunities. At a very high level, first there’s the risk from member use of AI: a member taking some scheme information and putting it into an open-source chatbot. You can mitigate the risk of the member getting misinformation from that by, say, putting warnings in member communications and newsletters, and that’s something we’ve been discussing with various trustee boards.

Secondly, there’s the risk at a service provider, where a cyber security breach is probably the main one. And Tom, we’ve talked before, haven’t we, about how trustees can mitigate that risk by reviewing their service providers’ contractual terms specifically with AI in mind. And thirdly, there’s the risk of the user themselves, so the trustee or perhaps a pension provider, using AI inappropriately, and a big step to mitigate risk there is to undertake training, I think. So, there are three actions there worth bearing in mind for our pensions clients as they put AI on their risk register.
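To make that three-way breakdown easier to lift onto a risk register, here is a minimal illustrative sketch in Python of how those risks and mitigations might be recorded as structured entries. The structure, field names and wording are our own assumptions for illustration; they are not a prescribed or regulatory format, and trustees should adapt them to their scheme’s existing risk register.

```python
# A minimal, illustrative sketch of recording the three AI risks discussed
# above on a scheme risk register. All field names and wording are
# assumptions for illustration, not a prescribed or regulatory format.

from dataclasses import dataclass, field

@dataclass
class RiskEntry:
    source: str                 # who is using the AI
    risk: str                   # the risk to the scheme
    mitigations: list[str] = field(default_factory=list)

ai_risks = [
    RiskEntry(
        source="Members",
        risk="Misinformation from putting scheme information into an open-source chatbot",
        mitigations=["Add warnings to member communications and newsletters"],
    ),
    RiskEntry(
        source="Service providers",
        risk="Cyber security breach at an administrator or other provider",
        mitigations=["Review providers' contractual terms specifically with AI in mind"],
    ),
    RiskEntry(
        source="Trustees / pension providers",
        risk="Inappropriate use of AI by the user themselves",
        mitigations=["Undertake AI training"],
    ),
]

# Print a one-line summary per entry, as might appear in a board pack.
for entry in ai_risks:
    print(f"{entry.source}: {entry.risk} -> mitigations: {'; '.join(entry.mitigations)}")
```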

Well, that’s all we’ve got time for, I’m afraid. Tom, it’s been lovely to chat again. To our listeners who might be feeling a little overwhelmed by how to approach AI: hopefully we’ve left you with some clear ideas for first or next steps.

It was lovely to speak with Tom on another episode of the Pensions Pod, here on Cyber and AI Bytes. If you’d like to know more about our Pensions and Lifetime Savings team and how our experts can work with you, you can contact me, Chris Brown, or Tom, or any of our team via our website. And as we say on every episode, all of our previous episodes are available on Apple, Spotify, our website, or wherever you listen to your podcasts. So don’t forget to subscribe, and thanks for listening.
