Steps trustees should take to mitigate the cyber risk of AI

Two hot topics in the pensions industry (and wider society) are artificial intelligence (AI) and cyber security.
AI is undoubtedly an incredibly useful tool in a trustee’s arsenal for running a pension scheme. However, it does increase the risk profile of cyber security for pension schemes via three routes (as set out below).
We recommend that trustees take steps to mitigate this risk, whether the trustees are using AI for scheme business or not.
Regulatory context
There is no specific AI guidance from The Pensions Regulator (TPR), and no real mention of AI in the General Code.
However, in June 2025, Nausicaa Delfas (TPR’s CEO) said in a speech that trustees “must understand the role of AI in industry”. Whilst the speech carries no binding regulatory authority, it does signal the direction in which TPR may be heading on trustees’ use of AI for scheme business.
How AI can improve cyber security
AI can be a powerful tool for mitigating cyber risk…
It can detect fraud and phishing attempts, and act as an early-warning system for cyberattacks.
TPR itself uses AI to identify scam websites.
However, AI can also increase cyber risk… whether the trustees use it or not.
1. Scheme use of AI
Trustees should only use AI for scheme business via a ‘closed’ tool (one with stringent data security procedures in place, which keeps inputs confidential), rather than an ‘open’ tool (a publicly available system that uses input data to train the AI and does not keep inputs confidential).
We suggest that trustees may wish to engage with the sponsoring employer to explore using any ‘closed’ AI tools that the sponsor has contracted to use (trustees who are still employed by the sponsor may already have access to such tools in any event).
Failing that, trustees will need to appoint a further third-party service provider in order to use AI. This adds another cyber security vulnerability for the pension scheme – after all, it is understood that some of the recent UK cyber incidents occurred through a service provider.
However, by conducting appropriate due diligence into the AI tool provider and putting in place appropriate contractual terms (e.g. on liability and notification requirements of a breach), this risk can be significantly mitigated.
2. Third-party service provider use of AI
Use of AI by a scheme’s service providers also creates further cyber risk – noting that, as above, it is understood that some of the recent UK cyber incidents occurred through a service provider.
The General Code states that trustees “should… ensure service providers are able to demonstrate that they have adequate internal controls relating to the services they provide”, which includes cyber security.
Trustees should therefore engage with service providers to understand how they are using AI, and to ensure that they have sufficient internal controls in respect of the AI use. Trustees should also ensure that appropriate contractual terms are in place with service providers (e.g. on liability and notification requirements of a breach).
3. Member use of AI
Further risks arise from member use of AI. Generative AI platforms, such as ChatGPT, pose a risk if members input sensitive pension data to understand scheme communications, because:
ChatGPT, for example, is trained on data that users input, and
Any data that is stored by the AI company could be stolen in the event of a cyber incident.
This should concern trustees for two reasons:
It is arguably within a trustee’s fiduciary duty to take proactive steps to protect members against foreseeable risks of harm in connection with the scheme (although this is a complicated area), and
Any breach of that data could risk the cyber security of the scheme.
Trustees should therefore consider including warnings in member communications, advising against the use of ‘open’ AI for interpreting scheme communications and documents – especially where it includes sensitive information regarding the member or scheme.
Conclusion
AI presents both opportunities and risks for pension scheme trustees – including in respect of cyber security.
Ultimately, there are three routes through which AI increases the risk profile of cyber security for pension schemes, each of which can be appropriately mitigated:
Potential use | Mitigation
1. Scheme use of AI | Conduct due diligence on, and negotiate contractual provisions with, any AI provider.
2. Third-party service provider use of AI | Engage with service providers to understand their AI use and related internal controls, and negotiate appropriate contractual provisions with them.
3. Member use of AI | Warn members against using ‘open’ AI, in particular on the cyber risks involved.
AI offers tremendous opportunities for pension schemes, but trustees must be aware of the technology’s cyber risk… whether the trustees themselves use it or not.