Speaker

Transcript

Chris Brown, Director, Burges Salmon

Hi everyone. Chris and Helen back, here for another episode, episode three of season four, of the Burges Salmon Pensions Pod. Hi Helen.

Helen Cracknell, Associate, Burges Salmon

Hi Chris.

So, today we're going to be talking about AI in pensions and we've got Tom Whittaker here, senior associate at Burges Salmon. So, Tom, you're a technology lawyer, you advise on AI regulation and horizon scanning, and you're often giving training, writing and speaking on AI laws and regulations. Great to have you on.

Tom Whittaker, Senior Associate, Burges Salmon

Great to be here, thanks all.

Chris

Okay Tom, so AI is critically important. I think the government has said it wants to be a science and tech superpower, so AI is definitely on the government agenda. We're starting to see it talked about in the pensions sphere, so we thought it would be a really good topic for a podcast. Partly for my own benefit, but for our listeners too, let's go right back to square one please. I'll start with a very simple question and we can build up from there: Tom, what is AI?

Tom

Thanks Chris, that's a really good question and one that everybody should be asking, as it's far too easy to get caught up in all the discussion about AI without really having clarity on what it is. The problem is that there is no agreed definition, and people do interpret it in different ways.

So many listeners will be familiar with ChatGPT because they have been using it, or their colleagues or members have, but there are various forms of AI out there and they all have different use cases, different issues and different risks. The UK does have a definition in statute, but that already looks out of date, and what we've seen over the last couple of years is that regulators are moving towards the recently updated OECD definition. So, if you want to get your pen out, this is what it is: an AI system is a machine-based system that, for explicit or implicit objectives, infers from the input it receives how to generate outputs, such as predictions, content, recommendations or decisions, that can influence physical or virtual environments, and different AI systems vary in their levels of autonomy and adaptiveness after deployment.

Chris

Is that a commonly accepted definition? Does it capture all the key themes or principles that you think need to be there for something to be AI? And also, just take me back with a very, very noddy question: AI is an acronym standing for?

Tom

Artificial Intelligence.

So, a lot of regulators and governments are moving towards that definition, although some are actively choosing not to define AI because their focus is more on the use case than the technology. But by breaking that definition down, you can see some of the themes that come out of it.

So, first of all, it's some sort of machine-based system. Second, there must be some form of objectives, whether implicit or explicit; often they are human-set objectives, and there's a risk that the AI's objectives and the human's objectives don't align. Then, inputs are turned into outputs, and that can happen in different ways and with different levels of transparency and explainability. But the two key elements, which are why regulators are talking about having a specific AI approach, are the adaptability of these systems, so they can be used for various purposes, and their potential autonomy: they can operate with limited or potentially no human oversight or input once they get running, and that increases the risk.

Helen

And Tom you mentioned use cases there, but would you be able to give us a brief summary of where we are with AI and how it is currently being used?

Tom

Certainly. It's an incredibly fast-moving area: new technologies are being developed, different people are finding different use cases in their sectors or domains, and the technology is becoming more ubiquitous. It's easier to access, there are free open-source versions, so it's easier to get hold of or to integrate with other systems. So you see a lot more publicity about what those different use cases may be.

If we go back to that definition, you can see some of the things that AI is able to produce. It can produce content, such as images, video and audio, which is the example of ChatGPT, DALL·E 2 or similar foundation model systems. It can also help with recommendations, so it could tell you what you should be watching on Netflix or other streaming services. It can help with decisions, such as financial analysis and the decision-making that comes from that. And it can help with predictions, so you can see some AI systems being used to help predict member behaviour or user behaviour, and from that whatever outcomes the organisation is interested in.

Chris

In the finance sector, investment outcomes would be a prime use case perhaps.

Tom

Absolutely.

Chris

And on what you're saying about it no longer being in the hands of the few, very much like your Netflix example, the man in the street has now got access to chatbots and all sorts of different products, and our pension scheme members will therefore have access to all sorts of different products too. So this is no longer something to think about for the future, it is with us here and now.

Tom

Absolutely and everybody is then experiencing AI in some form, whether they know it or not, and as a result people are getting really high expectations about what it's capable of doing, but what they don't necessarily see is all of the hard work and the issues that are worked through in the background and all of the risks that are being worked through and managed.

Chris

Yes okay.

So, before we come to think about those risks, challenges and opportunities in the pensions industry, could you just give us an overview of, you mentioned regulation, where are we with law and regulation in the UK at the moment?

Tom

So, in the UK the government has an AI strategy and a digital strategy, and it produced a white paper setting out its AI regulation framework back in March 2023, shortly before some big updates from the EU on its proposed AI Act. The UK has decided not to have AI-specific regulation, and the Prime Minister, Rishi Sunak, and others within government have said that they are in no rush to legislate specifically for AI.

So, what they're intending to do is have existing regulators apply existing regulations and laws, but those regulators should be considering how their regulations, their regulatory remit and the way they operate need to be adapted because of the use of AI.

Now, there's a risk of inconsistency or gaps between all of those regulators, so they are trying to coordinate, and there are also some cross-sectoral principles around things like safety, security, explainability, human governance and a few others. So, what you're likely to see over the next few years is further guidance being produced by regulators, in particular the ICO and the FCA, who have been incredibly active, and you'll also start to see some decisions and some enforcement coming out that will help provide guidance.

Chris

Thanks Tom, yes, and the Pensions Regulator, I believe, has recently spoken about the importance of harnessing innovation and has noted the rise of AI in pensions, which we'll come on to in a minute, Helen, I'll hand over to you, as something powerful but with warnings attached as well. So whether the Regulator will in time need separate codes of practice and guidance, and whether we'll get a lot of output from the Regulator on AI and managing AI risks, will be really interesting to see.

Tom just a very quick question, which is, I don't know whether you've had a chance to look at the Autumn statement, did the chancellor say anything about AI there?

Tom

Yes, there was one big announcement: a further £500 million to be invested over the next two financial years, and that should be seen in addition to the £900 million announced in the spring budget. A lot of this is about bringing increased compute capabilities and access across the UK, but it's also part of wider government plans to increase access to capital and access to talent, and to develop the overarching AI ecosystem in the UK.

Chris

Okay, that's really fascinating, and I really hope this episode is interesting as an introduction to AI for our trustee listeners, but for our employer listeners as well, who will no doubt need to be thinking about the incoming tide of AI innovation and change, not just in the pensions sphere but overall in what their businesses do.

So, thinking about pensions, Helen I might come to you, where are we seeing that AI might impact and disrupt the pensions industry?

Helen

Thanks Chris. So, as Tom's emphasised, AI is rapidly evolving and money is being put towards it, but I see three key areas within pensions that would lend themselves to AI.

Firstly, administration: managing member data and automating time-consuming admin tasks, for example data processing. Then member communications is a big one: you could get AI to respond to simple member queries or complaints, and also, as Tom said, it can draft things, so it could draft the communications for your members to make sure they are very clear and accessible and hit the right tone.

And then lastly, investment. As you said, the financial sector will be a big one with AI, and you could be using AI in pensions to analyse data to aid decision-making and reporting. I'm also sure it will come into play when things like the dashboard are up and running, and obviously more and more pension schemes are using apps and similar tools so members can access their data much more easily.

Chris

I sort of wonder whether AI will disrupt the pensions industry in terms of doing new things, so, like you say, maybe there will be an artificially intelligent IDRP first responder, for example, something totally new, or whether it will disrupt and improve the way that services are currently delivered. There's a big discussion in the legal industry about how much ChatGPT and AI research tools can help with legal research, so you can see it improving and making more efficient the existing services and processes within the industry as well.

Tom if I just bring you on this, what are your thoughts on where AI is going to impact pensions?

Tom

Well, there's the old adage that we probably overestimate the short-term changes and impact and then we underestimate the long-term changes and impact as well, but it's certainly important to be thinking about the long-term because of the number of foundational things that organisations need to have in place in particular around their people and around their data.

I think if we go back to that definition, we see there are various ways AI can be used. One is producing content, which is where Helen was talking about member communications and improving accessibility. But you also see it around recommendations, so you were talking about investment decisions and being able to analyse your data, potentially identifying themes you were otherwise unaware of, and then helping with recommendations as to what you should be doing in light of that, which should feed into any decision-making that's going on.

I think a useful way of thinking about this is to break it down into two different things. The first is about people and purpose. What do people in your organisation, or the people you work with, need? What are their concerns, and what could be done better for them? For example, could member communications be made clearer? Could you use ChatGPT to help suggest rewrites? Could you better analyse member complaints to understand common themes and issues, so you can summarise them more clearly when you're trying to deal with them? But with all of those points, especially around purpose, AI may help, but then again it may not; there may be other solutions as well. The human element may be much better, legally, commercially, practically and in many other senses too.

The second thing to pick out is about your organisation. What data do you have? It's not just about the quantity of data, it's also about the quality, and that's a particular problem within financial services and the pensions industry, where a lot of data is historic, maybe stored in different places, and may be held in different formats and with different structures, meaning the quality is not necessarily there to build out your data set and help build your models.

And then also think about who you have within your organisation. Are they technically savvy? Do they understand these different use cases and the technology? Do they understand the risks that come with all of this? So trying to get some of those foundational bits in place for the long term is really important too.

Helen

I think that point about data, Tom, is so apt when schemes come to their end of life. If, for example, they want to buy in with an insurer, suddenly you need to clean your data, get it to the highest quality and give warranties about the quality of that data, so AI could really come into its own there.

Tom

Absolutely, possibly analysing that data in a way that humans otherwise just wouldn't be able to, spotting issues. But with all of this, it needs not to replace what humans do, what the lawyers do, what the consultants do; it needs to augment what they do.

Chris

And I just want to pick up on that comment about the human touch. Trustees' role of course is fiduciary, and Tom, do you think we could ever be in a position where AI could be taking decisions, so that you completely lose that human element and the concept of a moral touch?

Trustees will often take decisions about dependants' benefits in death cases, having to apply a bit of heart and humanity to some decisions. In fact, one of the cases I like to come back to in the investment sphere, as an overarching description of investment duties, is a case from the 1880s I think, Re Whiteley, where, and I'm summarising the judgment here, trustees invest not as if they were investing for themselves, but as if they were morally bound to provide for the people they were investing for, for members. So there's that sort of moral connection, and could AI ever have that, or is it something that aids trustees but you have to keep trustees there?

Tom

It's definitely the latter: it aids trustees, and it aids all of the other humans who are providing input or advice, but it never replaces them. There are a couple of reasons in particular for that. One is that AI is largely about predictions: it uses the data sets and the models to predict what the outcome would be, but that's distinct from what the outcome actually is and what it should be. The second is that AI is not able to understand; it cannot make moral judgments or take another person's perspective. These are all innately human qualities and will remain vital to a trustee fulfilling their duties.

Helen

And Tom, on the downsides of AI, I have read some things about discrimination in AI and how it's already introducing bias?

Tom

That's absolutely right, and you can break it down in a number of ways, and these are particular risks that need to be considered. Data sets themselves may contain bias, depending on where they're extracted from and what they demonstrate. How data is processed, or how models are developed, may introduce, ingrain or exacerbate bias, depending on how they're being developed and how the weights and measures are put in place around those models.

And then also remember the difference between the AI output and the decision. The output will be used by human decision-makers, and there is a risk that they use it in a biased, inaccurate or inconsistent way. There's a risk that humans will just default to whatever the computer says, without using that human touch, that human criticality, and there's a risk that ultimately AI is not improving the position at all, it's actually making things worse.

And just thinking about it from the flip side, from members or employees: when they're using the different technology, not everybody will use it in the same way, and not everybody is able to use it in the way a developer, provider or deployer may be expecting. We hear about the digital divide across different age groups and different socioeconomic groups, and there's a real risk, as more becomes digitalised and there's greater use of AI, that some groups are in effect left behind and then excluded from certain things.

Chris

Yes and you can see that particularly in the pensions industry and we need to make sure that we don't leave those people behind.

Tom, just a few questions to wrap up the pod; this is a fascinating pod and I think we could talk all day. Firstly, one from left field that I was reading about: are you able to say anything about the environmental impact of AI? Trustees need to consider environmental, social and governance factors in their decision-making, particularly in investment. Where does AI sit, how environmentally friendly is it, because that's lots of data being churned through these machines, I'm sure?

Tom

It's a very good and very topical point, one that has come to people's attention even more in the last couple of years as we've used ChatGPT and other large language models and foundation models more, and as the companies behind them have started to disclose some of the environmental performance and consequences around them; there are a number of studies that look into that as well. So, there is a potentially significant environmental impact, and obviously those consequences need to be balanced against all the other outputs and consequences there may be from using AI. But you do see this starting to creep into the regulatory landscape. It's within the EU's AI Act drafts: the original one from 2021 said very little about environmental performance, but the more recent drafts, towards the end of 2023, contain proposals for specific reporting on environmental performance, and for that information gathering too. So yes, environmental performance, and indeed other ESG factors, are very important to be aware of.

Chris

Yes, thanks, there's a fascinating point there about duties. If you think about the call at the moment for DB assets to be invested sustainably, there's a question about whether trustees can take account of real-world environmental impacts as part of their investment decisions, and not just the ESG impact on their scheme. And that thought piece applies more widely too: to what service providers you're using and how you go about your business generally.

Helen

We've thrown around lots of different ideas today, but Tom, are there some specifics that our listeners can drill into, for example if they're trustees or even employers?

Chris

Yes what should trustees be doing now, Tom?

Tom

Absolutely. Well, I see this across a number of different sectors and types of organisation, but there are common themes to all of them. The first is getting the right people around the table, both internally and externally in terms of advisors, making sure you get people with different perspectives, diversity of thought but also diversity of understanding, so you get the legal, the compliance, the financial, the trustees, all of these different people.

Chris

Do you mean in terms of building knowledge?

Tom

Well, it's first of all working out who needs to know, and then what they know, and then you can start to build out what you need to learn further and what you need to keep your eyes on. So the first bit is getting the people there. The second bit is keeping your eye on what's going on, in terms of the technology, the use cases and the regulations, so there's that horizon-scanning piece, and that's something we're actively working on with a number of clients, seeing what there could be in the future. You can see that there are core themes across all these different regulations, and you can see the direction of travel as well, so building that long-term perspective in is particularly important.

And all of that then leads to working out what questions you need to ask: how does it fit within your existing governance framework, your risk management framework, and the other policies you have in place? Then you can start to work out where you may need to make some amendments or give further thought, but it also means you can start to focus in on where the opportunities are, so you can think about those use cases, how they would work in practice and what the risks may be.

Chris

Tom, that's great. So, just distilling that down, please, if you had one key takeaway for our listeners, what would it be?

Tom

So I'd describe it as being optimistic about the opportunities but realistic about the risks. What that means in practice is that you should be exploring the opportunities, because there is the potential to do things better, so there needs to be learning and engagement about both the technology and the use cases. But you also need to consider what the risks are, see how they're playing out in different sectors with different technology and different use cases, and start to think, and try to have some foresight, as to how they may affect you as well.

Helen

Thanks so much for coming on the podcast, Tom, and giving your valuable insights.

We've covered a lot of background today, but would you be willing to come on in the future and go through some specific test cases once they've occurred?

Tom

Absolutely, I think that'd be really valuable.

Helen

Yes, that'd be really helpful, thank you.

Thank you for listening to the Burges Salmon Pensions Pod. If you'd like to know more about how our experts can work with you, please contact myself, Chris or Tom via our website. All of our episodes are available on Apple, Spotify or wherever you listen to your podcasts. Thanks for listening.