OECD publishes Due Diligence Guidance for Responsible AI
In February 2026, the Organisation for Economic Co-operation and Development (OECD) published its Due Diligence Guidance for Responsible AI. The guidance is designed for multinationals developing and using AI, setting out a framework that supports the OECD's responsible business conduct (RBC) and AI principles. It is also of broader use, offering examples of practical risk management implementation and ways to navigate existing risk management frameworks.
Purpose of the guidance
The guidance supports the implementation of two key OECD instruments:
The MNE Guidelines require organisations to carry out risk‑based due diligence to identify and manage actual or potential adverse impacts, whether or not related to AI. While many national and international AI risk frameworks already exist, the OECD takes a broader “risk‑agnostic” approach. By not tying the guidance to specific metrics or categories, it can be applied across all enterprises and AI systems and remain relevant as research and regulation evolve.
The guidance is aimed at multinational organisations operating across the AI value chain, including those supplying inputs for AI development, participating in the AI system lifecycle, or using AI systems within their products and operations.
The OECD identifies four key objectives for the guidance:
The Six‑Step Due Diligence Framework
Reflecting the structure of the MNE Guidelines, the OECD sets out a voluntary six‑step due diligence process:
These steps are intended to be undertaken simultaneously and continuously, forming part of an ongoing risk‑management cycle rather than a one‑off exercise.
The guidance is more detailed than we can cover in this short overview. It explains that it:
…lays out the RBC due diligence framework and practical implementation examples for enterprises involved in the development and use of AI systems. The due diligence framework presented in this guidance also features a roadmap of related provisions in existing frameworks at the beginning of each step, indicating how each step of the due diligence framework is complemented by and relates to relevant provisions from related AI risk management frameworks.
For organisations considering how to manage AI-related risks, the guidance is worth weighing alongside other approaches to structuring responsible AI frameworks, to help determine the structure and approach that best suits them.
If you would like to discuss how current or future regulations impact what you do with AI, please contact Brian Wong, Tom Whittaker, Lucy Pegler, Martin Cook, Liz Smith or any other member in our Technology team. For the latest on AI law and regulation, see our blog and newsletter.
This article was written by Zac Bourne and Sharon Osborne.
The guidance is: OECD (2026), OECD Due Diligence Guidance for Responsible AI, OECD Publishing, Paris, https://doi.org/10.1787/41671712-en.