
Policymaking for Frontier AI in 2030 – DSIT publishes AI 2030 Scenarios report


The Department for Science, Innovation and Technology has released the AI 2030 Scenarios Report which focuses on five potential scenarios involving Frontier AI so that policymakers can explore areas of uncertainty and identify resilient courses of action. Here we summarise the key points.

The Challenge

As the report explains, policymakers face a challenge:

…there is significant uncertainty around what the future holds, in particular at the frontier. We don’t know what the most advanced models will be capable of, who will own them, how safe they will be, how people and businesses will use them, and what the geopolitical context will be. These uncertainties will interact in unpredictable ways to create a specific future in which policy makers will operate.

According to the report, Frontier AI models are those which are the most capable and generally applicable, creating a new, uncertain dynamic due to the pace of their improvement, their adaptability across multiple tasks, and their availability to anyone to interact with in natural language.

Key findings

The report’s key findings include:

  • There is the potential for widespread positive societal impact but that is likely only possible with policymaker action;
  • New risks will likely emerge whilst risks we know of today may have potential for greater impact and scale;
  • Frontier AI operating across multiple applications will make performance and safety evaluation difficult, but narrow applications will also present (different) policy challenges;
  • Large technology companies may (but won’t definitely) continue to hold ‘a huge amount of power’;
  • Negative impacts may be caused by a range of factors including the technology itself, bad actors, and ineffective safety systems;
  • Some policy interventions are likely to help: addressing bias; AI literacy; international collaboration;
  • There will be choices and trade-offs;
  • The public is concerned about safety and views government and regulators as responsible for ensuring the safe development and use of AI.

Uncertainties

The scenarios are built around five key areas of uncertainty: 

Capability: What ability will AI systems have to successfully achieve a range of goals? Will this include interaction with the physical world? How quickly will the performance and range of capabilities increase over time?

Ownership, access, and constraints: Who controls systems? How accessible are they? What infrastructure and platforms are used to deploy systems? What constraints are there on the availability of AI systems?

Safety: Can we build safe AI-based systems, assuring their validity and interpretability? How robust are systems to changes in deployment context? How successfully does system design ensure AI behaviour aligns to societal values?

Level and distribution of use: How much will people and businesses use AI systems? What for and why? Will they be consciously aware they are using AI, or not? How will people be affected by AI misuse? How will use affect people's education and jobs?

Geopolitical context: What wider developments have there been at a global level that will influence AI development and use? Will there generally be more cooperation on big issues, or more conflict?

The Scenarios

The Report presents five case study futures exploring the opportunities and challenges presented by the growth and evolution of Frontier AI by 2030. These scenarios are not predictions, but strategic tools designed to help policymakers explore uncertainty and test ideas.

  • Scenario 1: Unpredictable Advanced AI Highly capable but unpredictable open source models are released. Serious negative impacts arise from a mix of misuse and accidents. There is significant potential for positive benefits if harms can be mitigated.
  • Scenario 2: AI Disrupts the Workforce Capable narrow AI systems controlled by tech firms are deployed across business sectors. Automation starts to disrupt the workforce. Businesses reap the rewards, but there is a strong public backlash.
  • Scenario 3: AI “Wild West” A wide range of moderately capable systems are owned and run by different actors, including authoritarian states. There is a rise in tools tailored for malicious use. 
  • Scenario 4: Advanced AI on a Knife’s Edge Systems with high general capability are rapidly becoming embedded in the economy and people’s lives. One system may have become so generally capable that it is impossible to evaluate across all applications.
  • Scenario 5: AI Disappoints AI capabilities have improved somewhat, but big labs are only just moving beyond advanced gen AI. Investors are disappointed and looking for the next big development. There is a mixed uptake across society.

The report includes a chart showing how the five uncertainties map against each scenario.

 

There are limitations to the report: the evidence may not be comprehensive, as it was gathered ‘swiftly’ for the AI Safety Summit in November 2023; AI technologies develop at pace, so the scenarios may become dated; and they may not be applicable to all policy contexts. However, the factors and scenario planning remain useful as part of risk identification and strategy planning.

If you would like to discuss how current or future regulations impact what you do with AI, please contact Tom Whittaker, Brian Wong, Lucy Pegler, Martin Cook, Liz Smith or any other member of our Technology team. For the latest on AI law and regulation, see our blog and newsletter.

This article was written by Zac Bourne and Hidayah Ismail.
