US NIST launches Artificial Intelligence Risk Management Framework

The National Institute of Standards and Technology (NIST), a US Department of Commerce agency and one of the leading voices in the development of AI standards, has announced the launch of its Artificial Intelligence Risk Management Framework (the Framework).
NIST’s Framework is likely to influence future US legislation and global AI standards. In this article, we provide a high-level summary of the Framework.
What is the NIST Risk Management Framework?
NIST was required by the National Artificial Intelligence Initiative Act of 2020 to produce a resource for ‘organisations designing, developing, deploying, or using AI systems to help manage the many risks of AI and promote trustworthy and responsible development and use of AI systems’. The Framework seeks to cultivate trust in AI technologies and to support organisations to ‘prevent, detect and mitigate AI risks.’
The Framework is voluntary, is not sector-specific, and is intended to be adaptable by organisations of all sizes and across a wide range of AI use cases.
How is the Framework applied?
The Framework is divided into two parts. The first part discusses how organisations can understand the risks related to AI and describes the intended audience. It then goes on to outline the characteristics of trustworthy AI systems.
The second part, which forms the core of the Framework, describes four specific functions – govern, map, measure and manage – to help organisations address the risks of AI systems in practice.
NIST recommends that the Framework be applied at the beginning of the AI lifecycle and that diverse groups of internal and external stakeholders should be involved in ongoing risk management efforts.
NIST has also released a voluntary companion Playbook, which suggests ways to navigate and use the Framework.
The Framework is relevant outside the US
The Framework should be seen as part of wider international efforts to address the potential benefits and risks of AI. Whilst the US and EU are each taking different approaches to legislation (as can be seen in our horizon scanning piece), they have signed a number of agreements to develop trustworthy AI (for example, the September 2021 EU-US Trade and Technology Council statement). The EU’s standards organisations are likely to pay close attention to the Framework as they set AI-related technical standards.
In any event, NIST’s work is influential in both the US and internationally, and there is every reason to expect its Framework to help set standards globally. The EU AI Act includes requirements for high-risk AI systems, including around risk management systems and governance. Those requirements must take into account the ‘state of the art’ which may well include, amongst other things, international standards for managing risks associated with AI.
If you would like to discuss how current or future regulations and guidance impact what you do with AI, please contact Tom Whittaker or Brian Wong.
This article was written by Eve Hayzer.