Trusted Third-Party AI Assurance – DSIT Roadmap

In September 2025, the UK's Department for Science, Innovation & Technology (“DSIT”) published a policy paper, “Trusted Third-Party AI Assurance Roadmap”, setting out the UK Government’s vision for developing a credible and scalable AI assurance market in response to industry challenges. We summarise the key points here.

Scope and Purpose of the Roadmap

DSIT’s roadmap sets out the government’s ambition to grow a credible third-party AI assurance market and ensure AI is developed and deployed safely. 

The UK’s AI assurance market is nascent but growing: as of 2024, 524 companies contributed around £1.01 billion in value. DSIT projects this market could reach £18.8 billion by 2035 if key obstacles to AI adoption and assurance are addressed. Leveraging the UK’s strengths in professional services and technology, the government sees a unique opportunity to lead globally in AI assurance services.

The roadmap’s purpose is to identify the hurdles facing third-party AI assurance providers and to outline immediate government actions to overcome those hurdles, thereby unlocking the market’s potential and building public trust in AI.

Key Challenges

The DSIT paper highlights four key challenges in the market that must be overcome to build a trusted third-party AI assurance market:

  1. Quality of AI assurance: Technical standards underpin quality assurance, but AI systems, and the techniques for assuring them, are still developing, making it unclear which standards AI assurance should be held to. Under the current framework, existing certifications for AI assurance are not accredited by the United Kingdom Accreditation Service (UKAS).
  2. Shortage of talent: Current UK assurance providers report difficulty finding qualified people. Highlighted competencies include AI/machine learning, law and ethics, data governance, and technical standards.
  3. Information access: Assurance providers have identified a lack of access to the information required to assess AI systems – for example, training data, model details, or documentation of an AI system’s governance and performance. Companies deploying AI may be reluctant to share this information due to commercial confidentiality, security concerns, or simply not recognising what auditors need. Without clear guidance, firms may err on the side of withholding information.
  4. Innovation: There are limited forums for collaborative research on AI assurance. The policy paper notes that continual development of assurance techniques and tools is required to keep pace with the development of AI.

Proposed solutions 

To address the above challenges, the DSIT roadmap outlines several targeted initiatives:

  1. AI Assurance Professionalism: Proposals for a UK multi-stakeholder consortium to develop a voluntary code of ethics, a competency framework, and a certification and accreditation scheme for AI assurance professionals. The consortium is expected to include professional bodies and industry stakeholders.
  2. Developing Skills: Clearer academic and training pathways and a review of existing programmes to support entry into AI assurance roles. The new consortium will map the specific knowledge, skills, and training needed for AI assurance roles, aiming to encourage greater diversity in the industry by articulating the opportunities in AI assurance.
  3. Information sharing guidelines: Development of best-practice guidelines on information access, setting expectations early on about what data should be shared to facilitate effective assurance.
  4. Innovation and collaboration: Announcement of an AI Assurance Innovation Fund of £11 million to support research, prototypes, and pilot projects for new AI assurance tools and methods. The fund signals a wider public sector commitment to investing in AI governance and assurance tools, fostering a continuous pipeline of innovation to meet the evolving AI landscape.

In formulating the roadmap, DSIT evaluated three possible models to improve quality in the AI assurance market:

  • Professional certification/registration for individuals;
  • Certification of AI assurance processes; and
  • Accreditation of AI assurance firms.

Process certification and firm accreditation are seen as longer-term goals, secondary to first establishing a robust professional certification framework. The government’s stance is to start by building professional capacity and voluntary standards now, while remaining open to more formal certification or accreditation schemes as the industry develops. This phased approach aims to improve quality incrementally and nurture a robust third-party assurance framework without imposing too much red tape on a growing industry.

If you would like to discuss how current or future regulations impact what you do with AI, please contact Brian Wong, Tom Whittaker, Lucy Pegler, Martin Cook, Liz Smith, or any other member of our Technology team. For the latest on AI law and regulation, see our blog and newsletter.

This article was written by Zac Bourne and Tia Leader.
