
MIT publishes AI Agent Index


According to MIT, it has published ‘the first public database to document information about currently deployed agentic AI systems’ - see the index here and the associated paper here ([2502.01635] The AI Agent Index). In summary, AI agents are AI systems that can plan and execute complex tasks with limited human involvement. The aim is to fill the gap that there is ‘currently no structured framework for documenting the technical components, intended uses, and safety features of agentic AI systems’.

What is an AI agent?

According to MIT, there is no agreed-upon definition. MIT has used four characteristics: underspecification, directness of impact, goal-directedness, and long-term planning. These are the characteristics of agency set out in a paper on Harms from Increasingly Agentic Algorithmic Systems (https://arxiv.org/abs/2302.10329).

What is in the index?

AI systems were identified through public information, such as web searches, academic literature review, benchmark leaderboards, and additional resources that compile lists of agentic systems. The index is a snapshot as of 31 December 2024.

The index includes:

  • the system's components (e.g., base model, reasoning implementation, tool use),
  • application domains (e.g., computer use, software engineering), and
  • risk management practices (e.g., evaluation results, guardrails).

Key findings

According to the launch announcement, the key findings are:

Agentic AI systems are being deployed at a steadily increasing rate. While some systems in the index were (initially) deployed in early 2023, approximately half of the systems were deployed in the second half of 2024.

The majority of indexed agents specialize in software engineering and/or computer use. We divided the 67 agents into 6 categories: “software,” “computer use,” “universal,” “research,” “robotics,” and “other”.

There is limited information about the risk management practices of developers of agentic systems. This includes their safety policies, internal testing, and external testing.

Further, according to the associated paper, the authors find that ‘while developers generally provide ample information regarding the capabilities and applications of agentic systems, they currently provide limited information regarding safety and risk management practices’. 

If you would like to discuss how current or future regulations impact what you do with AI, please contact Tom Whittaker, Brian Wong, Lucy Pegler, Martin Cook, Liz Smith or any other member of our Technology team. For the latest on AI law and regulation, see our blog and newsletter.
