The Pensions Pod S6:E2 – AI in Pensions: Practical Insights for Trustees

In this episode of The Pensions Pod, Chris Brown and Callum Duckmanton discuss how AI is shaping the future of pensions. They cover trustees’ legal duties, the benefits and risks of using AI, practical steps for implementation, and insights from a recent industry event. A must-listen for trustees and pensions professionals looking to stay ahead in a changing landscape.

Here is the link to the article referenced in the podcast.

Chris Brown, Partner, Burges Salmon

Hello everyone and welcome to episode two of season six of the Burges Salmon Pensions Pod. As usual, I am Chris Brown. I’m a partner in our team and I am joined today by Callum Duckmanton. Callum, very good to have you on the podcast.

Callum Duckmanton (00:14)

Hi Chris, yeah, absolutely great to be here. So just as a quick introduction to me,

Chris Brown (00:21)

Yes, please do introduce yourself.

Callum Duckmanton (00:23)

I’m a solicitor in the Burges Salmon Pensions and Lifetime Savings Team and I have a focus on cyber, AI and also a bit of public sector work as well.

Chris Brown (00:33)

Yes, thanks very much, Callum and in fact, you’ve given a teaser of what we’re going to be talking about on today’s episode, because yesterday, Burges Salmon had the privilege of sharing a room with Quietroom, the communication specialists in pensions. And we hosted an event called AI and the Future of Pensions. And there was a fascinating discussion, lots of talks on communications and how to make them suitable for AI readers, on the future of AI.

There was a really interesting conversation by someone called Oliver Payne from Ford UK, who talked about the pensions chatbot that Ford has rolled out. And also we had talks from Burges Salmon from Tom Whitaker and Madelin Sinclair McAusland, who are both directors in our AI and tech team. And also, you and I did a double act looking at trustees’ fiduciary duties and how they fit in with AI. So that is the subject of this podcast.

So Callum, we’re not going to repeat the talk that we did yesterday here, but we thought it would be helpful to just pull out three to four key points that we made yesterday that will be interesting to trustees, but also to, well, anyone really who’s considering using AI in their business. Is that all right with you?

Callum Duckmanton (02:00)

Yes, absolutely. And point one from that talk is probably the duty of care and duty to act prudently.

Chris Brown (02:08)

Yes it is. What do you think about that?

Callum Duckmanton (02:10)

So just to add some legal background before we get into the details: the Trustee Act 2000 requires trustees to act with reasonable care and skill, and there’s also some case law. Re Whiteley, an old case from the 1880s, requires trustees to act prudently.

Chris Brown (02:26)

And for context, sorry Callum if you were about to say this, but it’s in the context of trustees and how they invest. I think it’s a really nice description of how trustees should carry out their business generally, because what Re Whiteley says is that trustees have to invest, and more generally act, as if they were morally bound to provide for beneficiaries. So it brings that moral obligation into being a trustee and acting prudently.

Callum Duckmanton (03:00)

Absolutely agree. The General Code, so a bit more recent than Re Whiteley, adds a bit more detail on how trustees should act prudently. It requires trustees to keep their knowledge up to date and also to use available skills. And in my opinion, from all the digging we’ve done into AI, AI is a genuinely useful tool for trustees to manage their scheme better.

So just by way of an analogy, Chris, for me, a trustee not using AI would be like a trustee refusing to use emails or phones to communicate because by failing to use AI, they would be hindering their own ability to act and therefore, in my opinion, would be acting imprudently.

Chris Brown (03:43)

Yeah, and you’ve also got the Trustee Act 2000, haven’t you, which says that trustees have to act with reasonable care and skill. I suppose, given that AI is coming into the industry, and given that The Pensions Regulator’s CEO has said that trustees should have an understanding of AI, being reasonably careful means considering things that might help you carry out your duties. That’s the prudent thing to do.

You know, it’s incumbent on a prudent trustee, loyally fulfilling that moral obligation, to consider tools like phones, emails and AI that might help carry out their duties. And of course, different boards will do that to different extents. But I think where we are now, we are seeing trustees in trustee meetings that we go to wanting to explore AI and learn more about it.

Callum Duckmanton (04:33)

Absolutely agree and probably just worth adding that The Pensions Regulator hasn’t really said too much on AI and that speech that you mentioned in June is one of the few things that it has said providing a steer to trustees in that they should take steps to understand the role of AI in the industry at the very least.

Chris Brown (04:54)

Yeah, absolutely. So that was the thought piece for why we thought this would be an interesting topic, really, because there isn’t yet that industry guidance. There isn’t industry-specific regulation on AI, so when trustees are thinking about AI, they need to go back to their core fiduciary duties. So look, AI helps efficiently manage the trust and the trust assets, which is a core duty of trustees as well, isn’t it?

I said we will mention three or four points from the talk yesterday. If our second one is about efficiently managing the trust, what’s interesting here? What would you say here?

Callum Duckmanton (05:28)

Absolutely. So, just to provide a little bit of the legal background again, the Pensions Act 2004 sets out that trustees need to implement internal controls to manage risk, and those internal controls must be proportionate to the circumstances of the scheme. And AI helps to automate tasks, improve compliance and also save time and money by doing things a bit more efficiently.

So in my opinion, a trustee board that’s not using AI will be less efficient than a trustee board that is using AI. And therefore, the trustee board that isn’t using AI will be risking breaching their duty to properly and efficiently manage their trust.

Chris Brown (06:11)

Yeah, that’s interesting. Albeit efficiency mustn’t come at the cost of control: a key point coming out of a lot of the talks yesterday was that AI works well, but with human oversight. And I think that really goes back to the morality and loyalty aspects of being a fiduciary.

And actually there’s an important point to note, isn’t there, in terms of data protection legislation, which was a point in Tom and Madi’s talk yesterday. I’m saying yesterday; when this podcast goes out, the talk won’t have been yesterday, but the talk we did was on the 16th of September and we’re recording this on the 17th. The point that Madi mentioned was that under the new data protection legislation, if you have automated decision making, so suppose you’re operating a tool where AI makes certain investment decisions for members, then that is permitted, but there are safeguards around its use. In particular, there are various communication obligations. So trustees can use AI for efficiency, but it brings extra risks and things that trustees need to think about and comply with.

Callum Duckmanton (07:18)

And just to say, Chris, if anyone does want to read a bit more into automated decision making and the relevance for trustees and the pension industry, we have got an article out there written by Callum Duckmanton if you want to go and read that as well.

Chris Brown (07:34)

Yeah super, available on the Burges Salmon website.

Okay, so we talked about prudence, we’ve talked about efficiently managing the trust. AI where we and others are seeing it disrupt the industry the most is probably in the sphere of communicating with members, member engagement, and the member experience. What are your takeaways here?

Callum Duckmanton (07:56)

Yeah, so the General Code requires member communications to be of good quality and to be accurate, clear and tailored. And the relevance of AI here is that it can be used to create personalised and accessible communications.

So for example, it can produce an AI avatar that is tailored to the specific person, which is more likely to engage that person, and, for example, if they speak a foreign language, the avatar can speak that language too. So to propose a use case for AI in this context, imagine a scheme is considering running a mass communications exercise, maybe a Section 67 informed consent exercise. With AI, the trustees could communicate with the membership through personalised videos rather than a traditional long-form letter, which may result in more take-up of the informed consent exercise and therefore save costs for the scheme.

Chris Brown (08:54)

Yeah, okay, what are the concerns that trustees should be thinking about if they’re using AI to communicate with members?

Callum Duckmanton (09:01)

So there are probably two. The first is the AI providing financial advice, which would be a breach of financial services regulations. So trustees definitely do not want to be doing that. But that risk can be mitigated by engaging with the AI provider properly to ensure that isn’t happening.

And the second is discrimination. There are various case studies of AI producing biased and discriminatory outputs due to the material it has been trained on also being discriminatory. You can imagine that happening in a pension scheme context because of the membership, for example, being predominantly male. And you would end up with a potential breach of the duty to act fairly between members.

Chris Brown (09:51)

Which is one of the trustees’ core duties. And as trustees and providers are getting to grips with implementing AI, the last thing the industry needs is another discrimination / equalisation issue on their hands. But that risk could be overcome with training about how the AI model works and making sure that you look out for possibilities of discrimination.

There was that example with Amazon, wasn’t there, a number of years ago now. Amazon realised that its AI algorithm for recruiting employees was biased against women because, as you say, it was trained on CVs that Amazon had received over the previous 10 years, most of which had come from men. So the AI algorithm trained itself to favour applications like those, which were from men rather than from women.

Okay, so we have talked about communication, we’ve talked about efficiently managing the trust and prudence. I just want to end now on a point around AI in governance, which is where a lot of people are using AI, I suppose. Lots of people will be familiar with Copilot in Teams, and that raises interesting questions for fiduciaries coming to decisions: might the presence of an AI transcript in a meeting stifle discussion? A trustee might not ask the stupid question, or might not put up what’s perceived to be the obviously wrong answer for it to be knocked back down. So there’s a risk around that.

Thinking about trustees’ legal duties, there’s no legal duty to record the reasons for a trustee decision. We often advise on this in the context of internal dispute resolution procedure cases. Often it’s better for trustees to have a succinct record of the rationale for their thinking, but not a full transcript. So we’re seeing it in governance too.

Callum Duckmanton (11:54)

Absolutely, but that is a risk that can be mitigated. We’ve recently advised a trustee client on implementing an AI policy in respect of trustee meetings that appropriately mitigates those risks that you just mentioned.

Chris Brown (12:09)

Yeah, indeed we have. So look, that’s just a flavour of some of the points that came out in our talk yesterday, which was just one in a range of interesting talks across the afternoon. I learned some quite interesting things from the Quietroom talk about the way you write, writing as if it’s going to be read by a large language model, which was really fascinating.

So Callum, for our podcast listeners, can you just leave us please with key actions that trustees listening to this in particular might want to take away, things that trustees could be doing now when thinking about AI?

Callum Duckmanton (12:48)

Absolutely. So I’ve got a list of a few steps we would recommend and to start off with one that we haven’t really touched on in this episode so far, but one that I think is really important to protecting members. So providing a warning to members against using AI to understand member communications that they receive from their pension scheme.

Chris Brown (13:12)

That’s something we’ve seen a few trustee boards think about wanting to put in their newsletters, in their communications with members.

Callum Duckmanton (13:20)

Absolutely. So we’ve drafted wording that’s gone on member funding statements, for example, and that warning to members focuses on the risk of putting the information into open-source AI due to the potential cyber security risk there and also the potential risk of hallucinations by the AI.

Chris Brown (13:42)

Yeah, absolutely. That’s brilliant and is one thing we’re seeing trustees doing. What else should trustees be doing at the moment?

Callum Duckmanton (13:49)

Yeah so just to maybe go through the remaining ones a bit more quickly. So we recommend that trustees consider taking up training so that they can understand the risks that we’ve touched on.

Chris Brown (13:59)

Which goes to the point that the regulator’s CEO made in that speech.

Callum Duckmanton (14:04)

Absolutely. And another thing is liaising with service providers to understand how the service providers are using AI because that’s just as important as how the trustees personally are using AI.

Chris Brown (14:18)

Absolutely.

Callum Duckmanton (14:19)

Also adding it to the scheme’s risk register. So of course, you should be conscious of the benefits that AI has to offer, but there are also risks that you should be aware of and have plans on how to counter, for example by providing a warning to members, as I mentioned a moment ago.

Chris Brown (14:35)

And finally, trustees and anyone listening to this who is interested in AI will want to take advice from a number of advisors, including legal advice on options, solutions, and as you say, mitigating risks. And as you say, that conversation probably starts with some training about AI. And you and I are very happy to come into trustee meetings to talk about AI with anyone who would find that interesting, aren’t we?

Perfect. Perhaps could you leave our listeners with one thought on AI in one line, please?

Callum Duckmanton (15:00)

Absolutely.

I think AI has lots of exciting benefits. There are risks involved, but those risks can absolutely be addressed with some proper safeguards. So trustees should really be looking at understanding AI and how you can use it to benefit your members.

Chris Brown (15:25)

Thanks ever so much, Callum. I completely agree with that. And it’s been great having you on the podcast.

Callum Duckmanton (15:30)

Thank you very much.

Chris Brown (15:35)

Well, if you’ve enjoyed listening to that episode and you’d like to know more about our Pensions and Lifetime Savings team and how our experts can work with you, then please do contact myself, Chris Brown, or Callum, or any of our team via our website. And as we say every episode, all of our previous episodes are available on Apple, Spotify, our website, or wherever you listen to your podcasts. Don’t forget to subscribe, and thanks for listening.