Preparing for AI Emergencies: Recommendations from the Centre for Long-Term Resilience
On 25 September 2025, the Centre for Long-Term Resilience published a report examining the UK’s preparedness for AI-related security incidents. The Centre is an independent think tank in the UK with a “mission to transform global resilience to extreme risks”.
The report highlights that as AI becomes more powerful and widely deployed, the risk of major incidents, such as disruption to critical infrastructure or misuse by malicious actors, continues to grow. The authors note that “the UK Government does not have the necessary powers to intervene in a crisis” and that current safeguards, which rely heavily on voluntary commitments from AI companies, are “imperfect and fragile”.
The Centre makes a number of recommendations relevant to the UK's anticipated AI Bill, which is understood to be in development but for which the scope, content, and consultation date have not yet been announced.
Key recommendations for the UK AI bill
The Centre proposes a ‘preparedness framework’ for AI security incidents, modelled on the UK’s approach to biological security. This framework is built around four objectives:
The report recommends that the UK AI bill should:
The authors also call for the publication of a national AI Security Strategy, similar to the UK’s Biological Security Strategy, to ensure a holistic and accountable approach to AI risks. They argue that lessons from the COVID-19 pandemic show the importance of preparing for unlikely but high-impact scenarios.
For further information on AI regulation and incident preparedness, please contact Tom Whittaker, Brian Wong, Lucy Pegler, Martin Cook, Liz Griffiths, Kerry Berchem or any other member of Burges Salmon’s Technology team.
Written by Nathan Gevao