New Updates to the MIT AI Risk Repository


On 4 December 2025, MIT added nine new AI risk frameworks to its AI Risk Repository in pursuit of its “ongoing commitment to maintaining a comprehensive, transparent, and up-to-date resource for understanding risks from artificial intelligence systems”. We previously summarised key updates to the AI Risk Repository here and here. The latest update (Version 4) includes a “mix of government and industry reports and preprints” published in 2024 and 2025, alongside proposed frameworks for risk mitigation in Generative AI systems. The Repository is useful both to identify specific frameworks that may be applicable and to compare and contrast different approaches.

We summarise the key updates below.

Key updates

Risk of harm for non-using stakeholders: Several of the new additions to the Repository identify that Generative AI creates a greater risk of harm for those who have not directly interacted with the AI system, harm that is “often precipitated by those who do interact directly with Generative AI”. Because non-interacting stakeholders generally “do not understand the capabilities, limitations, or risks of Generative AI”, they are more susceptible to believing hallucinations or the outputs of malicious use. In turn, Generative AI developers often avoid the accountability “assumed in many AI governance models”.

Wider scope of risk assessment: Risk frameworks should be implemented at the development stage and continually reviewed at all levels of a system’s use. The paper “Dimensional Characterisation and Pathway Modelling” by Ze Shen Chin identifies the need for greater risk assessment during AI use and interaction, as four out of six identified AI risks “lie outside the control of model developers”. These include malicious use of AI after development to aid “CBRN threats, cyber offense, … gradual loss of control, environmental risk, and geopolitical risk”. Risk assessment and mitigation should therefore extend beyond the development stage to reflect this.

Alignment of AI risk management: To comprehensively address risks in Frontier and Embodied AI systems (systems that “exist in, learn from, reason about, and act in the physical world”), there is a call for “compatible frameworks across the industry” and for greater engagement from regulatory bodies and policymakers to provide a unified risk mitigation approach. The paper “Frontier AI Risk Management Framework” by Shanghai AI Laboratory and Concordia AI comments that “the stakes are too high, and the potential benefits too great, for anything less than our most coordinated and comprehensive response”.

Human involvement: Almost all of the risk frameworks added to the Repository call for greater human oversight and regulation in the risk management of Generative AI models, LLMs, Frontier AI, Embodied AI and AI scientists. The study “Risks of AI scientists: prioritising safeguarding over autonomy” states that “while autonomy is an admirable goal and significant for enhancing productivity across various scientific disciplines, it cannot be pursued at the expense of generating serious risks and vulnerabilities”.

Operational frameworks: Another theme across the added frameworks is a focus on practical steps that can be implemented to mitigate AI risks. For example, the “Frontier AI Risk Management Framework” outlines step-by-step approaches for managing systemic risks across an AI model’s lifecycle, while “Dimensional Characterisation and Pathway Modelling” aims to “bridge the conceptual gap” in defining AI risk by providing a structured framework that analyses six catastrophic AI risks in context and outlines corresponding mitigation strategies. This practical focus aims to make AI risk management measurable and auditable.

The AI Risk Repository is frequently updated as a living database and taxonomy of AI risk, so further updates can be anticipated.

If you would like to discuss how current or future regulations impact what you do with AI, please contact Tom Whittaker, Brian Wong, Lucy Pegler, Martin Cook, Liz Griffiths or any other member of our Technology team. For the latest on AI law and regulation, see our blog and newsletter.

Written by Isabelle Vallis, Victoria McCarron and Liz Griffiths
