MIT has published a draft AI Risk Mitigation Taxonomy. It was created by identifying and extracting mitigations from 13 frameworks that propose AI risk mitigations, producing an AI Risk Mitigation Database of 831 mitigations organised into 4 top-level categories and 23 subcategories.
In summary, the 4 top-level categories are:
- Governance and Oversight Controls: clear structures and policies to ensure people stay in control of how AI is used, make responsible decisions and manage risks throughout its development and use. Its subcategories include board structure and oversight, conflict of interest protections, and whistle-blower reporting and protection.
- Technical and Security Controls: built-in protections to ensure AI systems are safe, secure and aligned with human values, and produce trustworthy content. Amongst the mitigation subcategories is model and infrastructure security, which focuses on technical and physical safeguards to secure AI models and infrastructure against unauthorised access, theft, tampering and espionage.
- Operational Process Controls: the systems and processes that guide how AI is used, monitored and managed, helping to ensure safety, security and accountability throughout the system lifecycle. Amongst the five mitigation subcategories is testing and auditing, which covers the internal and external evaluations of AI systems, infrastructure and compliance used to identify risks, verify safety and ensure standards are met.
- Transparency and Accountability Controls: clear ways of sharing and checking AI system information to allow external review, helping to build trust, support oversight and ensure accountability to users, regulators and the public.
Looking forward, the MIT Risk Repository project intends to conduct a systematic review of mitigation frameworks and is seeking feedback on the draft taxonomy. The report also identifies areas for further work, such as addressing conceptual overlap between some mitigations, mapping risks by actor, and identifying organisational conditions that reduce AI risks.
If you would like to discuss how current or future regulations impact what you do with AI, please contact Tom Whittaker, Brian Wong, Lucy Pegler, Martin Cook, Liz Smith or any other member in our Technology team. For the latest on AI law and regulation, see our blog and newsletter.