
Should AI be given legal personality?

Tom Whittaker

This is a question posed in the Law Commission's discussion paper on AI and the Law.

As the paper states, its purpose is to ‘increase awareness of the potential impacts of AI on the law and to encourage discussion of these issues as a step towards future law reform, where it is required.’ The paper discusses a range of potential legal issues raised by AI. It does not propose legal reforms.

However, it does ask whether AI should be given legal personality. It notes that a key difficulty is identifying a natural or legal person to be responsible for AI systems and that, ‘…given the rapid pace of AI development, and the potentially increasing rate of pace of development, it is pertinent to consider whether AI legal personality requires further discussion now, in the event that such highly advanced AI arrives in the near future.’

What is legal personality?

Having legal personality can be described as having a bundle of rights and obligations, such as the ability to own property, to enter into contracts, and to sue and be sued in the legal person's own name. A company, for example, has legal personality separate from the natural persons who run it.

The paper notes that legal personality has been given to a range of entities, such as a river in New Zealand and temples in India.

Pros and cons

The paper notes that foreseeing the potential pros and cons is difficult. To give a glimpse of what some may be, it notes that:

  • Pros potentially include making it easier to identify who is liable and responsible when an AI system causes harm, encouraging AI innovation and research by separating AI system liability from the developer's liability, and encouraging AI systems to develop safely (e.g. by incentivising the system to avoid liability).
  • Cons potentially include protecting developers from liability when they should be liable, and the difficulty of holding an AI system itself to account.

What are some of the issues to be considered?

  • Which AI systems, or types of systems, should be granted legal personality? Where should the line be drawn, and how? The paper notes that some have suggested criteria such as autonomy, awareness, and intentionality could be used.
  • What ‘bundle of rights and obligations’ should an AI system be granted? What would need to happen for that to be permitted? For example, for a company to gain limited liability status in England it must be registered, disclose the names of its directors and people with significant control, and file annual accounts.
  • What mechanism would be in place so that an AI system could be subject to sanction were it to commit a criminal offence?
  • How would legal questions related to the AI system's actions be determined? For example, would the AI system be required to act with reasonable skill and care and, if so, by what standard would it be measured?

It will be interesting to see how this question is now picked up. For example, will it be raised in House of Lords debates on potential AI regulation? Will it be picked up in the anticipated government consultation on a UK AI Bill? Time will tell.

If you would like to discuss how current or future regulations impact what you do with AI, please contact Brian Wong, Tom Whittaker, Lucy Pegler, Martin Cook, Liz Smith or any other member of our Technology team. For the latest on AI law and regulation, see our blog and newsletter.
