Preparing for AI Emergencies: Recommendations from the Centre for Long-Term Resilience

On 25 September 2025, the Centre for Long-Term Resilience published a report examining the UK’s preparedness for AI-related security incidents. The Centre is an independent UK think tank with a “mission to transform global resilience to extreme risks”.

The report highlights that as AI becomes more powerful and widely deployed, the risk of major incidents, such as disruption to critical infrastructure or misuse by malicious actors, continues to grow. The authors note that “the UK Government does not have the necessary powers to intervene in a crisis” and that current safeguards, which rely heavily on voluntary commitments from AI companies, are “imperfect and fragile”.

The Centre makes a number of recommendations relevant to the UK's anticipated AI Bill, which is understood to be in development but whose scope, content, and consultation date have not yet been announced.

Key recommendations for the UK AI Bill

The Centre proposes a ‘preparedness framework’ for AI security incidents, modelled on the UK’s approach to biological security. This framework is built around four objectives:

  • Anticipation: The government should secure access to information about threats from frontier AI, enabling effective risk assessment and scenario planning.
  • Prevention: Proportionate risk governance and management should be implemented by both government and the private sector to address known threats.
  • Preparation: The readiness of a ‘whole of society’ response should be regularly tested and improved, with participation from leading AI companies.
  • Response: The government should have strong legal powers to direct fast, decisive action from AI companies during an acute incident.

The report recommends that the UK AI Bill should:

  • Introduce incentives and requirements to enhance the government’s capacity for anticipation, prevention, preparation, and response.
  • Include emergency powers allowing the government to obtain information from AI companies, direct their actions, and, if necessary, restrict access to AI models during a crisis.
  • Require AI companies to report serious incidents to the government to facilitate a rapid response.

The authors also call for the publication of a national AI Security Strategy, similar to the UK’s Biological Security Strategy, to ensure a holistic and accountable approach to AI risks. They argue that lessons from the COVID-19 pandemic show the importance of preparing for unlikely but high-impact scenarios.

For further information on AI regulation and incident preparedness, please contact Tom Whittaker, Brian Wong, Lucy Pegler, Martin Cook, Liz Griffiths, Kerry Berchem or any other member of Burges Salmon’s Technology team.

Written by Nathan Gevao
