AI in 2026 – What Does Stanford’s AI Index Tell Us?
Stanford University's Human-Centered Artificial Intelligence Institute (HAI) has published its annual AI Index Report for 2026, tracking the state of AI across research, capability, the economy, policy, and public opinion. The report provides a comprehensive, global, and data-driven analysis of the current state of AI in areas including R&D, technical performance, responsible AI, the economy, science and medicine, education, policy and governance, and public opinion.
Here we summarise key highlights and in other articles we dive deeper on what the report tells us about responsible AI, AI and the economy, and AI and government policy.
AI models now meet or exceed human-level performance on PhD-level science questions, multimodal reasoning, and competition mathematics. On a key coding benchmark, SWE-bench Verified, performance rose from 60% to near 100% in a single year. Industry produced over 90% of notable frontier models in 2025. However, AI models can still perform poorly on some simple tasks, such as reading an analogue clock; this is the "jagged frontier" of AI capability.
The performance gap between top US and Chinese AI models has effectively closed, with the lead changing hands multiple times since early 2025. China leads in publication volume, citations, and patent output, while the US retains higher-impact patents and more top-tier models.
Generative AI reached 53% population adoption within three years, faster than the PC or the internet. Organisational adoption rose to 88%. The estimated value of generative AI to US consumers reached $172 billion annually by early 2026.
Documented AI incidents rose to 362 in 2025, up from 233 in 2024. Foundation model transparency declined after having improved the previous year. Improving one responsible AI dimension, such as safety, can degrade another, such as accuracy.
Training Grok 4 produced an estimated 72,816 tons of CO₂ equivalent. AI data centre power capacity rose to 29.6 GW, comparable to New York state at peak demand. Annual GPT-4o inference water use alone may exceed the drinking water needs of 12 million people.
If you would like to discuss how current or future regulations impact what you do with AI, please contact Brian Wong, Tom Whittaker, Lucy Pegler, Martin Cook, Liz Griffiths or any other member in our Technology team. For the latest on AI law and regulation, see our blog and newsletter.