UK AI Security Institute Research Agenda published

The UK's AI Security Institute (AISI) has published its research agenda. The agenda provides a snapshot of AISI's current research priorities for frontier AI risks and mitigations, and explains how it aims to have impact; the full details of its methods and objectives cannot be published due to the sensitivity of its work. Here we summarise the agenda.
AISI states that its research programme serves as:
the foundation for the Institute’s work – through our technical teams we’re establishing a rigorous technical understanding of the most serious emerging AI risks within government, building the infrastructure, tools, and best practices for assessment. We are also developing solutions and mitigations to these risks, enabling the UK to seize the opportunities presented by AI safely and securely.
It does this by identifying risks and developing solutions to mitigate them.
AISI's risk research can be categorised into domain-specific research and generalised research.
AISI's domain specific research, which may change over time, currently focuses on:
- Cyber Misuse: Risks posed by AI systems being used to support or conduct malicious activity on or through cyber systems.
- Dual-use Science: Risks posed by AI systems that are highly capable at scientific tasks, which have beneficial applications but also carry associated misuse risks.
- Criminal Misuse: Risks posed by AI systems being used to support or conduct a range of criminal activities.
- Autonomous Systems: Risks posed by misused AI systems escalating out of control, or by systems taking harmful action without meaningful human oversight.
- Societal Resilience: Risks that will emerge as frontier AI systems are deployed widely and interact with economic and societal structures.
- Human Influence: Risks posed by AI being used to manipulate, persuade, deceive, or imperceptibly influence humans.
AISI's generalised research currently focuses on:
- Science of Evaluations: Develop and apply rigorous scientific techniques for measuring frontier AI system capabilities, so that those measurements are accurate, robust, and useful in decision-making.
- Capabilities Post-Training: Ensure the AI systems that AISI evaluates demonstrate truly frontier performance (the limit of what’s possible given current technology) in AISI’s focus domains.
AISI's solutions research focuses on mitigating risks. It does so through:
- Conceptual Research: We conduct research across our teams on ‘Safety Cases’: structured arguments for the safety of a system deployed in a specified environment. This work enables us to prioritise our empirical work.
- Empirical Research: We conduct evaluations of technical mitigations, including through adversarial red-teaming to assess their robustness, and determine best practices.
- Promoting External Research: We identify concrete research challenges for AI safety and security experts, and drive academic, non-profit, and industry research through problem books and grants.
AISI summarises its three primary routes to impact as:
- State awareness: We share research findings with key policy decision makers so that they are fully abreast of the state of frontier AI safety and able to make well-targeted policy and governance interventions. We focus on partners within the rest of the UK government, the US government, and national security partners, and engage broadly with the Network of AISIs and many international governments.
- International protocols: Working with key partners across government, we distil key research findings into best practices, standards, and protocols for AI safety and security and cohere model developers, deployers, and international actors around them.
- Independent technical partner to labs: We conduct testing exercises on the most advanced models, share research findings and collaborate with frontier model developers to drive targeted safety improvements. For example, we surface concerning capabilities and safeguard vulnerabilities, and we share best practice mitigations against a specific risk.
If you would like to discuss how current or future regulations impact what you do with AI, please contact Tom Whittaker, Brian Wong, Lucy Pegler, Martin Cook, Liz Smith or any other member in our Technology team. For the latest on AI law and regulation, see our blog and newsletter.
By way of background, AISI describes its role as follows:
Artificial Intelligence presents an enormous opportunity to the UK. It is at the heart of the UK’s plan to kickstart an era of economic growth, transform how public services are delivered and boost living standards for working people across the country. AI also introduces serious security risks that must be addressed to build public trust and ensure safe adoption. The AI Security Institute (AISI) was set up to equip governments with a scientific understanding of the risks posed by advanced AI. We are the world’s largest government team dedicated to AI safety and security research. We conduct research to understand the capabilities and impacts of advanced AI and develop and test risk mitigations.