Trusted Third-Party AI Assurance – DSIT Roadmap

In September 2025, the UK's Department for Science, Innovation & Technology (“DSIT”) published a policy paper titled “Trusted Third-Party AI Assurance Roadmap”, setting out the UK Government’s vision for developing a credible and scalable AI assurance market in response to industry challenges. We summarise the key points here.
Scope and Purpose of the Roadmap
DSIT’s roadmap sets out the government’s ambition to grow a credible third-party AI assurance market and ensure AI is developed and deployed safely.
The UK’s AI assurance market is nascent but growing, with over 524 companies contributing around £1.01 billion in value as of 2024. DSIT projects this market could reach £18.8 billion by 2035 if key obstacles to AI adoption and assurance are addressed. Leveraging the UK’s strengths in professional services and technology, the government sees a unique opportunity to lead in AI assurance services globally.
The roadmap’s purpose is to identify the hurdles facing third-party AI assurance providers and to outline immediate government actions to overcome those hurdles, thereby unlocking the market’s potential and building public trust in AI.
Key Challenges
The DSIT paper highlights four key challenges that must be overcome to build a trusted third-party AI assurance market:
Proposed Solutions
To address the above challenges, the DSIT roadmap outlines several targeted initiatives:
In formulating the roadmap, DSIT evaluated three possible models to improve quality in the AI assurance market.
Process certification and firm accreditation are seen as longer-term goals, secondary to establishing a robust professional certification framework. The government’s stance is to build professional capacity and voluntary standards now, while remaining open to more formal certification or accreditation schemes as the industry matures. This phased approach aims to improve quality incrementally and nurture a robust third-party assurance framework without imposing excessive red tape on a growing industry.
If you would like to discuss how current or future regulations impact what you do with AI, please contact Brian Wong, Tom Whittaker, Lucy Pegler, Martin Cook, Liz Smith or any other member in our Technology team. For the latest on AI law and regulation, see our blog and newsletter.
This article was written by Zac Bourne and Tia Leader.