
PASA publish guidance on AI use: key takeaways for trustees


Earlier this week (28 October 2025), the Pensions Administration Standards Association (PASA) published new industry guidance titled ‘Use of AI in Pensions Administration – Embrace the Opportunity with Caution’.

This is the first guidance document published by a major industry body in respect of AI usage in the industry. Whilst the guidance does not have any regulatory power (as it is not from The Pensions Regulator or the government, for example), it marks a significant step forward for AI in the pensions space. 

Risks of AI use in administration… and recommended mitigation steps

There are many benefits to using AI – which we have commented on previously here. However, there are also risks involved with its use. The PASA guidance sets out its recommended “key risk management considerations”. In essence, trustees should engage promptly with their administrator to understand how they are using AI and to confirm that they have adopted adequate safeguards.

By way of further information:

Risk: AI capabilities and limitations

PASA comment: AI has limitations and should be used to “augment, not replace” humans.

PASA recommended mitigation: Trustees should ask administrators to comment on “the AI technologies they’re deploying, including both capabilities and limitations”.

Our comment: Understanding AI’s limitations is a key step. Engaging with advisers to ensure that they are aware of the limitations (e.g. that users of AI at the company have undertaken appropriate training) is a commendable step to take. Trustees may also wish to undertake training themselves.

Risk: Ensuring data privacy and security

PASA comment: AI “heightens the risk of data breaches and unauthorised access”.

PASA recommended mitigation: “Understand how data is being stored and used, and that adequate data security is in place”.

Our comment: Trustees should engage with their advisers (their administrator in particular, as they will hold the highest volume of, and most sensitive, member information) to understand their AI usage and confirm that appropriate cyber security safeguards are in place. We recently commented on the key cyber risks of AI and the steps trustees should take in response – here.

Risk: Mitigating data bias

PASA comment: AI “can inadvertently perpetuate biases present in data”.

PASA recommended mitigation: “Understand how data bias is being mitigated, and how saver feedback is being incorporated”.

Our comment: For example, Amazon used AI to sift job applications. As the AI was trained predominantly on male applications, it learnt to prefer these and so discriminated against female applicants.

Risk: Decision-making transparency

PASA comment: AI decision-making can be “opaque”.

PASA recommended mitigation: “Ensure all decision making is reviewed” and continue making material decisions based on advice and documentation.

Our comment: Under the Data (Use and Access) Act 2025, individuals must be provided with information about any significant decisions taken in relation to them based solely on ADM (automated decision making), and they also have the right to contest a decision to use ADM in respect of them. We recently commented on the impact of the Act – here.

Risk: Human review and oversight

PASA comment: PASA stresses the importance of “balancing AI outputs with human judgement… AI systems are tools to augment, not replace”.

PASA recommended mitigation: Contact administrators to confirm “how AI is implemented alongside the team”.

Our comment: Trustees should take steps to ensure that AI is not being used to provide financial advice to members.

Risk: Scheme-specific AI applications

PASA comment: All pension schemes are unique.

PASA recommended mitigation: Ensure scheme specifics are understood before deploying AI.

Our comment: Large schemes, or schemes with a sponsor in the technology industry, may wish to consider implementing a chatbot for their members.

Use of AI outside of administration

Outside of engagement with the administrator, PASA also urges trustees to consider:

  • “including warnings/caution to savers in communications about using AI to make financial/retirement decisions”, and
  • “seek[ing] out training and support in using AI”.

We strongly recommend use of such “warnings/cautions”, and have provided appropriate wording and also AI training to trustee clients. If you are interested in either of these, or wish to discuss how AI might impact your scheme, please contact Callum Duckmanton or Chris Brown. 

Comment

AI offers an exciting opportunity for trustees to enhance the member experience (such as by using AI-created avatars that are tailored to that specific member) and streamline administration in order to save costs.

However, AI also brings with it significant risks through…

  • Adviser-use (the focus of the PASA guidance). Trustees should engage with their administrators promptly to understand how they are using AI. See above and the PASA guidance for the key risks and recommended actions to mitigate these.
  • Trustee-use. A first step here may be to undertake training on AI.
  • Member-use. Risks here include cyber security concerns and AI hallucination. Trustees should include warnings to members to mitigate this risk.

To hear more about the practical steps that we recommend taking to mitigate AI risks, you can listen to our podcast episode on the topic here.

This article was written by Callum Duckmanton and Richard Pettit. 
