AI assurance: portfolio of techniques published by CDEI

The Centre for Data Ethics and Innovation (“CDEI”), in collaboration with techUK, has published its Portfolio of AI assurance techniques.
The CDEI states that it will be of use to anybody involved in designing, developing, deploying or procuring AI-enabled systems, and showcases examples of assurance techniques being used in the real world to support the continued development of trustworthy AI.
This delivers one of the actions identified in the UK White Paper: to 'help innovators understand how AI assurance techniques can support wider AI governance, the government will launch a Portfolio of AI assurance techniques in Spring 2023. The Portfolio is a collaboration with industry to showcase how these tools are already being applied by businesses to real-world use cases and how they align with the AI regulatory principles' (for a flowchart to help navigate UK AI regulation, click here).
Here we draw out the key points.
What is Assurance?
The CDEI defines 'assurance' as building confidence in AI by 'measuring, evaluating and communicating whether an AI system meets relevant criteria'.
What is in the Portfolio?
The CDEI's portfolio contains fourteen real-world case studies drawn from multiple sectors and covering a range of technical, procedural and educational approaches. The CDEI has mapped these techniques to the UK government's white paper on AI regulation to support wider AI governance. These case studies are not endorsed by the government, but demonstrate the range of possible options that currently exist.
What are the Assurance Techniques?
The CDEI has detailed several techniques, including impact assessments, risk assessments, bias audits, compliance audits, conformity assessments, certification, performance testing and formal verification.
Different techniques may be used at various stages across the AI lifecycle.
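To make one of these techniques concrete, below is a minimal sketch of what a simple bias audit check might look like in code. Everything here is illustrative: the function, the toy data and the five-percentage-point tolerance are our own assumptions for demonstration, not part of the CDEI portfolio or any regulatory requirement.

```python
# Illustrative sketch of one assurance technique: a bias audit.
# All names, data and thresholds are assumptions for demonstration only.
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Return the largest gap in positive-outcome rates between groups,
    plus the per-group rates.

    predictions: iterable of 0/1 model outputs
    groups: iterable of group labels (e.g. a protected characteristic),
            aligned with predictions
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Toy audit: flag the system for review if positive-outcome rates
# diverge by more than an assumed tolerance of 5 percentage points.
gap, rates = demographic_parity_gap(
    predictions=[1, 0, 1, 1, 0, 0, 1, 0],
    groups=["a", "a", "a", "a", "b", "b", "b", "b"],
)
print(f"Positive-outcome rates by group: {rates}")
print(f"Gap: {gap:.2f} -> {'review needed' if gap > 0.05 else 'within tolerance'}")
```

In practice a bias audit would cover multiple fairness metrics and real outcome data, document the results, and would typically be carried out or reviewed by an independent party.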
The techniques should also be seen in the context of the wider AI assurance ecosystem. Which techniques are right for a specific AI system will depend on the context in which it is used. The UK White Paper proposed cross-sectoral principles for AI regulation: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress. What these look like in practice will depend upon how regulators interpret their application within their regulatory remits and the context of each AI system.
If you would like to discuss how current or future regulations impact what you do with AI, please contact Tom Whittaker or Brian Wong or another member of Burges Salmon's Technology team.
https://www.gov.uk/guidance/cdei-portfolio-of-ai-assurance-techniques