AI Index Report 2023: AI regulation

The AI Index Report 2023 has been published by Stanford University for the sixth year, offering global analysis of trends in areas such as AI R&D, ethics, the economy, education, policy and public opinion.
As the EU AI Act progresses and following the publication of the UK government's White Paper setting out its proposals and next steps on AI regulation, it is useful to take a global, longer-term view of how AI has been developing. Here we identify some of the key takeaways relevant to AI regulation.
"Until 2014, most significant machine learning models were released by academia. Since then, industry has taken over. In 2022, there were 32 significant industry-produced machine learning models compared to just three produced by academia. Building state-of-the-art AI systems increasingly requires large amounts of data, computer power, and money—resources that industry actors inherently possess in greater amounts compared to nonprofits and academia"
"Global AI private investment was $91.9 billion in 2022, which represented a 26.7% decrease since 2021. The total number of AI-related funding events as well as the number of newly funded AI companies likewise decreased. Still, during the last decade as a whole, AI investment has significantly increased. In 2022 the amount of private investment in AI was 18 times greater than it was in 2013."
"The proportion of companies adopting AI in 2022 has more than doubled since 2017, though it has plateaued in recent years between 50% and 60%, according to the results of McKinsey’s annual research survey. Organizations that have adopted AI report realizing meaningful cost decreases and revenue increases."
This is of note to governments and regulators:
"According to the AIAAIC [Algorithmic, and Automation Incidents and Controversies] database, which tracks incidents related to the ethical misuse of AI, the number of AI incidents and controversies has increased 26 times since 2012. Some notable incidents in 2022 included a deepfake video of Ukrainian President Volodymyr Zelenskyy surrendering and U.S. prisons using call-monitoring technology on their inmates. This growth is evidence of both greater use of AI technologies and awareness of misuse possibilities".
The AI, Algorithmic, and Automation Incidents and Controversies (AIAAIC) Repository is an independent, open, and public dataset of recent incidents and controversies driven by or relating to AI, algorithms, and automation.
The number of newly reported AI incidents and controversies in the AIAAIC database was 26 times greater in 2021 than in 2012 (figures for 2022 are not yet available, as incidents are vetted before publication). The report also notes that historic incidents may be under-reported, an issue also identified in Cardiff University's analysis of cancelled algorithmic decision-making projects in the public sector. Examples span sectors and jurisdictions, including the use of AI to monitor prison inmates' telephone calls in the US and to risk-profile gang members in London. The breadth of AI uses and risks is one reason for growing national and international regulatory interest in AI.
"An AI Index analysis of the legislative records of 127 countries shows that the number of bills containing “artificial intelligence” that were passed into law grew from just 1 in 2016 to 37 in 2022. An analysis of the parliamentary records on AI in 81 countries likewise shows that mentions of AI in global legislative proceedings have increased nearly 6.5 times since 2016."
"Of the 127 countries analyzed, since 2016, 31 have passed at least one AI-related bill, and together they have passed a total of 123 AI-related bills (Figure 6.1.1). Figure 6.1.2 shows that from 2016 to 2022, there has been a sharp increase in the total number of AI-related bills passed into law, with only one passed in 2016, climbing to 37 bills passed in 2022"
The analysis is based on references to 'artificial intelligence' within legislation (proposed or enacted) or in legislative debates. However, not all references to artificial intelligence are equal: some appear in legislation focused on AI, others do not; some relate to legislation with potentially significant impact, others not. The laws also vary in whether they affect all public institutions or only some, whether they operate at federal or state level, and in the sectors to which they apply.
Examples in the report reflect this variety.
At the very least, the increasing number of references to artificial intelligence in proposed and enacted legislation reflects the growing role AI plays in all parts of society. The same trend appears elsewhere in the report, such as the near 6.5-fold increase in mentions of AI in global legislative proceedings since 2016.
The extent to which proposed legislation and policy will impact an organisation depends on the laws and policies in question, the organisation itself, and its sector. Identifying global trends is useful for the big picture, but understanding the impact on a specific organisation requires tailored analysis.
If you would like to discuss how current or future regulations impact what you do with AI, please contact Tom Whittaker or Brian Wong.
Nestor Maslej, Loredana Fattorini, Erik Brynjolfsson, John Etchemendy, Katrina Ligett, Terah Lyons, James Manyika, Helen Ngo, Juan Carlos Niebles, Vanessa Parli, Yoav Shoham, Russell Wald, Jack Clark, and Raymond Perrault, “The AI Index 2023 Annual Report,” AI Index Steering Committee, Institute for Human-Centered AI, Stanford University, Stanford, CA, April 2023.
"The AI Index Report tracks, collates, distills, and visualizes data related to artificial intelligence. Our mission is to provide unbiased, rigorously vetted, broadly sourced data in order for policymakers, researchers, executives, journalists, and the general public to develop a more thorough and nuanced understanding of the complex field of AI. The report aims to be the world’s most credible and authoritative source for data and insights about AI."