AI Agent Standards – NIST launches initiative
The National Institute of Standards and Technology (“NIST”) has launched its AI Agent Standards Initiative, aiming to establish clearer, more consistent foundations for agentic AI. We can expect further research, guidelines, and content in the future. Here, we summarise key points about what NIST is doing.
What is Agentic AI?
Agentic AI refers to systems that can plan and carry out tasks with a degree of autonomy, acting independently of human input over an extended period rather than producing a single response to a single input. For organisations, this offers potentially meaningful efficiency gains – reducing manual workload, improving response times, integrating multiple data sources, and supporting smoother digital operations across teams and functions.
However, there are also issues. For instance, an agent designed to accept meeting invitations might be misinterpreted by another system as having permission to action formal approvals or filings.
What NIST is doing
NIST is a U.S. federal agency that develops widely used technical standards to support secure and reliable technology.
To address the identified gaps in the adoption and continued use of agentic AI, NIST's Center for AI Standards and Innovation has launched an initiative built around three strategic pillars. Together, these workstreams are designed to create a clear, consistent framework that helps agentic AI operate safely, reliably and in a way that different organisations and technologies can confidently adopt.
Alongside the initiative, NIST's National Cybersecurity Center of Excellence has published a Concept Paper on Software and AI Agent Identity and Authorisation. The paper outlines how existing standards – including OAuth, OpenID Connect, SPIFFE/SPIRE and Zero-Trust principles – could be adapted to ensure AI agents are properly identified, authenticated and authorised.
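To make the authorisation point concrete, the sketch below illustrates the least-privilege idea behind OAuth-style scopes as applied to the meeting-invitation example above. This is a simplified illustration, not anything specified by NIST or the Concept Paper, and the scope names used ("calendar:accept", "filings:approve") are hypothetical:

```python
# Minimal sketch of OAuth-style scope checking for an AI agent.
# The scope names below are hypothetical, for illustration only.

def is_authorized(granted_scopes: set[str], required_scope: str) -> bool:
    """An action is permitted only if its scope was explicitly granted."""
    return required_scope in granted_scopes

# An agent credentialed only to accept meeting invitations...
agent_scopes = {"calendar:accept"}

# ...may accept an invite:
print(is_authorized(agent_scopes, "calendar:accept"))   # True

# ...but may not action a formal approval or filing,
# even if another system assumes it can:
print(is_authorized(agent_scopes, "filings:approve"))   # False
```

In a real deployment these scopes would be carried in a signed token issued to an authenticated agent identity (for example, an OAuth access token bound to a SPIFFE identity), so that every downstream system can verify what the agent is actually permitted to do.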
Next Steps
While the initiative remains in its early stages, NIST is expected to announce further research, guidelines, and other deliverables, and to engage stakeholders whose input will inform the initiative's progress and any frameworks developed under it.
Although NIST is a U.S. body, its frameworks are adopted by organisations internationally – for example, NIST's AI Risk Management Framework and its Generative AI profile (Generative AI: US NIST publishes risk management framework - Burges Salmon).
NIST's work signals the direction of travel: as agentic AI becomes more capable, identity, access control and accountability will become central to secure deployment. New standards, frameworks, and processes will be needed given the different nature, use and risk profile of agentic AI.
If you would like to discuss how current or future regulations impact what you do with AI, please contact Brian Wong, Tom Whittaker, Lucy Pegler, Martin Cook, Liz Smith or any other member in our Technology team. For the latest on AI law and regulation, see our blog and newsletter.
This article was written by Zac Bourne and Lewis Osborne.