
Mitigating ‘Hidden’ AI Risks Toolkit published

Tom Whittaker

The UK Cabinet Office's Government Communications team has published a toolkit on mitigating hidden AI risks. The lessons in it were learned while scaling ‘Assist’, an AI tool rolled out to 200 government organisations that supports drafting by improving both speed and consistent use of communications best practice.

This article explains the key aspects of the toolkit, including the hidden risks identified and the framework for identifying and mitigating them.

Who and what is the toolkit for?

The toolkit is designed for individuals and teams responsible for implementing AI tools and services within organisations, and for those involved in AI governance. It can be used for new or existing AI systems, including to:

  • Assess the barriers your target users and organisation may experience to using your tool safely and responsibly
  • Pre-empt the behavioural and organisational risks that could emerge from scaling your AI tools
  • Develop robust risk management approaches and mitigation strategies to support users, teams and organisations to use your tool safely and responsibly
  • Design effective AI safety training programmes for your users
  • Monitor and evaluate the effectiveness of your risk mitigations to ensure you not only minimise risk, but maximise the positive impact of your tool for your organisation

Whilst the toolkit is designed for the public sector, the Government Communications Service's stakeholder engagement suggested that some of the same challenges and barriers are faced in the private sector, so the toolkit is likely to be of use there, too.

‘Hidden’ AI risks

The toolkit notes that various AI risks are often front of mind - deepfakes, bias, hallucinations - and that many of these risks are already addressed in existing practical frameworks.

However, the toolkit draws a parallel with aviation: risks can also come from more mundane sources, ones which are hidden and whose origin is difficult to identify. These can be hard to foresee, given the limited historical examples from which to learn.

There are limitations with existing approaches to AI safety:

  1. De-risking the AI tool itself using technical measures and guardrails - however, this does not address many ‘hidden’ risks.
  2. Ensuring human oversight, or a ‘human in the loop’ - however, people can be ineffective at judging algorithmic outputs and at determining whether and how to override them.
  3. Assigning risk ownership to users - however, this depends on the users, who may be fallible.

Consequently, the toolkit proposes a new approach: surfacing ‘hidden’ risks by proactively anticipating them and implementing effective preventative measures. This requires:

a systematic understanding of the likely underlying causal mechanisms that lead to unintended consequences coming about as a result of AI use: these causal mechanisms are the day-to-day decisions and actions taken by individuals, teams, and organisations

The toolkit categorises six types of ‘hidden’ risk arising from organisational AI roll-outs:

Quality Assurance

Risks arising from people using inaccurate or average-quality outputs in their work.

Task-tool Mismatch

Risks arising from the use of tools for purposes for which they weren’t designed or at which they don’t perform well.

Perceptions, Emotions and Signalling

Risks arising from emotional responses induced by AI roll-out, people’s perceptions of and attitudes towards AI, or the signals sent by an organisation’s adoption or use of AI.

Workflow and Organisational Challenges

Risks arising from the work required to embed AI in an organisation or changes to people’s ways of working.

Ethics

Risks arising from violations of, or threats to, ethical standards and norms or legal rights (e.g. under the Equality Act 2010), or from uses that are not in line with organisational guidelines and codes of conduct.

Human Connection and Technological Overreliance

Risks arising from the reduction or removal of human involvement in roles or functions, or from overreliance on technical solutions for complex problems.

A framework

The toolkit then provides a detailed framework of questions designed to prompt users to unearth potentially hidden risks, understand their potential consequences, and identify mitigating steps.

In summary, the Government Communications Service used it as follows:

  1. Set up a multidisciplinary and diverse working group
  2. Surface potential hidden risks for your tool
  3. Review and prioritise risks
  4. Monitor and develop mitigation strategies for your risks
  5. Implement ongoing monitoring and review mechanisms

The toolkit then provides greater detail on each.

If you would like to discuss how current or future regulations impact what you do with AI, please contact Brian Wong, Tom Whittaker, Lucy Pegler, Martin Cook, Liz Smith or any other member of our Technology team. For the latest on AI law and regulation, see our blog and newsletter.
