AI assurance techniques: UK publishes an updated portfolio

The UK government's Responsible Technology Adoption Unit (RTAU, a directorate within the Department for Science, Innovation and Technology, formerly the Centre for Data Ethics and Innovation) has published an updated Portfolio of AI Assurance Techniques. The Portfolio offers access to new use cases showing real-world examples that promote trustworthy AI development, and will be of interest to anyone designing, deploying, or procuring AI.
The Portfolio currently stands at 72 use cases submitted to the RTAU (which the RTAU and UK government do not endorse), and is searchable by:
In the RTAU's words, AI assurance is about building confidence in AI systems by measuring, evaluating and communicating whether an AI system meets relevant criteria, such as regulation, standards, ethical guidelines and organisational values. It also plays a role in identifying and managing AI risks.
The RTAU provides the following examples, each of which applies to one or more stages of the AI lifecycle and so may be used alone or in combination:
Further detail about AI assurance is available in the UK government's Introduction to AI assurance. The development of the Portfolio forms part of the 'Governing AI effectively' pillar of the UK's National AI Strategy, and reflects the UK government's ongoing work to develop an effective AI governance ecosystem, such as identifying potential barriers and enablers to effective AI governance.
If you would like to discuss how current or future regulations impact what you do with AI, please contact Tom Whittaker, Brian Wong, Lucy Pegler, David Varney, Martin Cook or any other member in our Technology team.
For the latest on AI law and regulation, see our blog and sign up to our AI newsletter.