
AI Agent Standards – NIST launches initiative


The National Institute of Standards and Technology (NIST) has launched its AI Agent Standards Initiative, aiming to establish clearer, more consistent foundations for agentic AI. Further research, guidelines and related deliverables are expected to follow. Here, we summarise key points about what NIST is doing.

What is Agentic AI?

Agentic AI refers to systems that can plan and carry out tasks with a degree of autonomy, operating over an extended period with limited human input, in contrast to producing a single response to a single input. For organisations, this offers potentially meaningful efficiency gains – reducing manual workload, improving response times, integrating multiple data sources, and supporting smoother digital operations across teams and functions.

However, there are also issues, such as:

  • Interoperability gaps – There are currently no shared technical standards that allow different agents, tools or platforms to interact consistently. Systems may be built differently, leading to breakdowns when agents try to coordinate tasks across environments.
  • Trust and permission issues – Organisations need confidence that an agent will act within the correct boundaries, and that an agent claiming to act on behalf of a given entity really is doing so with the correct permissions. Without common rules for identity and authorisation, AI agents may have unrestricted and unmonitored access to corporate data. This creates greater operational risk, including data vulnerabilities and exploitation.

For instance, an agent designed to accept meeting invitations might be misinterpreted by another system as having permission to action formal approvals or filings. 

What NIST is doing

NIST is a U.S. federal agency that develops widely used technical standards to support secure and reliable technology. 

To address the identified gaps in the adoption and continued use of agentic AI, NIST's Center for AI Standards and Innovation has launched an initiative built around three strategic pillars:

  1. Facilitating Industry-led Standards – increased industry and stakeholder engagement and consultation to develop voluntary guidelines that inform industry-led standardisation for agentic AI.
  2. Fostering Community-led Protocols – community-driven open-source protocol development to enable agents to communicate across platforms.
  3. Investing in Research – research into security, identity and authorisation, creating safer ways for agents to act on behalf of users.

Together, these workstreams are designed to create a clear, consistent framework that helps agentic AI operate safely, reliably and in a way that different organisations and technologies can confidently adopt. 

Alongside the initiative, NIST's National Cybersecurity Center of Excellence has published a Concept Paper on Software and AI Agent Identity and Authorization. The paper outlines how existing standards – including OAuth, OpenID Connect, SPIFFE/SPIRE and Zero-Trust principles – could be adapted to ensure AI agents are properly identified, authenticated and authorised.
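To illustrate the kind of control the concept paper contemplates, here is a minimal, hypothetical sketch in Python of scope-based agent authorisation. All names and scope strings are illustrative assumptions, not taken from any NIST specification: the agent presents a token listing the actions it has been delegated, and the system refuses anything outside that scope – such as the meeting-invitation agent attempting a formal approval.

```python
# Hypothetical sketch of OAuth-style scope checking for an AI agent.
# Names and scope strings are illustrative only, not from any NIST paper.

from dataclasses import dataclass, field


@dataclass(frozen=True)
class AgentToken:
    """Identifies an agent, the principal it acts for, and its delegated scopes."""
    agent_id: str
    acting_for: str
    scopes: frozenset = field(default_factory=frozenset)


def authorise(token: AgentToken, requested_action: str) -> bool:
    """Allow an action only if it appears in the token's delegated scopes."""
    return requested_action in token.scopes


# An agent delegated only calendar permissions...
token = AgentToken(
    agent_id="calendar-agent-01",
    acting_for="alice@example.com",
    scopes=frozenset({"calendar:accept_invite", "calendar:read"}),
)

print(authorise(token, "calendar:accept_invite"))  # within scope
print(authorise(token, "filings:approve"))         # outside scope: refused
```

The point of the sketch is simply that permissions are explicit and checked per action, rather than inferred from the agent's general access, which is the gap the identity and authorisation workstream is intended to close.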

Next Steps

Whilst the initiative remains in its infancy, NIST is expected to announce further research, guidelines and other deliverables, and to continue engaging stakeholders to inform the progress of the initiative and any frameworks that emerge from it.

Although NIST is a U.S. body, its frameworks are adopted by organisations internationally – for example, the NIST AI risk management frameworks for AI and generative AI (Generative AI: US NIST publishes risk management framework - Burges Salmon).

NIST's work signals the direction of travel: as agentic AI becomes more capable, identity, access control and accountability will become central to secure deployment. New standards, frameworks, and processes will be needed given the different nature, use and risk profile of agentic AI. 

If you would like to discuss how current or future regulations impact what you do with AI, please contact Brian Wong, Tom Whittaker, Lucy Pegler, Martin Cook, Liz Smith or any other member of our Technology team. For the latest on AI law and regulation, see our blog and newsletter.

This article was written by Zac Bourne and Lewis Osborne.
