Mitigating ‘Hidden’ AI Risks Toolkit published

The UK Cabinet Office's Government Communications team has published a toolkit on mitigating hidden AI risks. The lessons in it were learned during the roll-out of ‘Assist’, an AI drafting tool now used across 200 government organisations, which improves both drafting speed and the consistent use of communications best practice.
This article explains the key aspects of the toolkit, including the hidden risks identified and the framework for identifying and mitigating them.
The toolkit is designed for individuals and teams responsible for implementing AI tools and services within organisations and those involved in AI governance. It can be used for new or existing AI systems, including to:
Whilst the toolkit is designed for the public sector, the Government Communications Service's stakeholder engagement suggested that some of the same challenges and barriers are faced in the private sector, so it is likely to be of use there, too.
The toolkit notes that various AI risks - deepfakes, bias, hallucinations - are often front of mind, and many of these are already addressed in existing practical frameworks.
However, drawing a parallel with aviation, the toolkit observes that risks can also come from more mundane, hidden sources whose origins are difficult to identify. These can be hard to foresee, given the limited historical examples from which to learn.
There are limitations with existing approaches to AI safety:
Consequently, the toolkit proposes a new approach - surfacing ‘hidden’ risks - which means proactively anticipating potential risks and implementing effective preventative measures. This requires:
a systematic understanding of the underlying causal mechanisms likely to lead to unintended consequences of AI use: these mechanisms are the day-to-day decisions and actions taken by individuals, teams, and organisations
The toolkit identifies six categories of ‘hidden’ risk arising from organisational AI roll-outs:
Quality Assurance
Risks arising from people using inaccurate or average-quality outputs in their work.
Task-tool Mismatch
Risks arising from the use of tools for purposes for which they weren’t designed or at which they don’t perform well.
Perceptions, Emotions and Signalling
Risks arising from emotional responses induced by an AI roll-out, people’s perceptions of and attitudes towards AI, or the signals sent by an organisation’s adoption or use of AI.
Workflow and Organisational Challenges
Risks arising from the work required to embed AI in an organisation or changes to people’s ways of working.
Ethics
Risks arising from violations or threats to ethical standards and norms or legal rights (e.g. Equality Act 2010), or that are not in line with organisational guidelines and codes of conduct.
Human Connection and Technological Overreliance
Risks arising from reductions in, or the removal of, humans from roles or functions, or from overreliance on technical solutions for complex problems.
The toolkit then provides a detailed framework of prompt questions to help unearth potentially hidden risks, understand their possible consequences, and identify mitigating steps.
In summary, the Government Communications Service used it as follows:
The toolkit then provides greater detail on each.
If you would like to discuss how current or future regulations impact what you do with AI, please contact Brian Wong, Tom Whittaker, Lucy Pegler, Martin Cook, Liz Smith or any other member of our Technology team. For the latest on AI law and regulation, see our blog and newsletter.