Professional conduct and AI – Ayinde v Haringey


When is an AI case not an AI case? When it’s a case about the conduct of those using AI.  

A lot has been written about Ayinde v Haringey [2025] EWHC 1383 (Admin) with a focus on AI.  However, the focus should instead be on those who use AI.  In this article, we first explain the conduct issues, and then the practical points for litigators.

The cases

The decision is that of the Divisional Court, which considered two cases referred to it by lower courts and listed together under the court's "inherent power to regulate its own procedures and to enforce duties that lawyers owe to the court" (known as the Hamid jurisdiction).

In summary:

Ayinde case

This concerned judicial review proceedings. The claimant's barrister submitted grounds containing citations which did not exist and a summary of legislation which was not correct. The defendant's solicitor raised this.  The claimant's solicitor and paralegal asked the barrister for copies of the cases cited but did not receive them. At the same time, the barrister drafted a response to the defendant, which the claimant's solicitor reviewed and sent, stating (in summary) that the citation of non-existent cases was a ‘cosmetic’ issue.

Neither in the underlying case nor before the Divisional Court was it proved that the barrister used GenAI; the barrister denied doing so. However, the Divisional Court stated that on the evidence before it, the barrister had “not provided to the court a coherent explanation for what happened”.  It also transpired that the barrister had previously made submissions which included false citations, but the matter had not been referred to the regulator due to assurances given by the barrister and the barrister's head of chambers.

Al-Haroun case

This case was for damages for an alleged breach of a financing agreement.

The solicitor relied on the legal research of his client (the claimant) without independently verifying it. The client's research was based on citations generated using publicly available artificial intelligence tools, legal search engines and online sources.

The claimant and solicitor apologised to the court and made clear they did not intend to mislead. The Divisional Court decided that the threshold for contempt proceedings against the solicitor was not met, but that it would refer him to the SRA. 

Conduct issues

The facts of the cases include alleged use of AI (Ayinde) or actual use of AI (Al-Haroun).  However, whether or not AI was involved, there are significant conduct issues, for example: a solicitor relying upon, and failing to verify, their client’s work (Al-Haroun); a solicitor and barrister failing to check what was being put before the court and downplaying the impact of getting it wrong; and a barrister failing to provide a coherent explanation of what happened (Ayinde).

Professor Moorhead (Professional Ethics, University of Exeter Law School) has written about the Ayinde case, arguing that the underlying Ayinde case is not about AI, and that the Divisional Court's decision is still not an AI case.  Professor Moorhead concludes that:

This is a case of unprofessionalism, of wing it and wangle it, of ignoring rules and deadlines. Bluffing it when fake law is pointed out is but a hop and a skip in this culture. The case may or may not tell us something about AI, but what it plainly does tell us is that when lawyers can’t be trusted it has more to do with them, their cultures and practices, the structures and cultures they work within, than it has to do with ChatGPT. This was not an instant reaction in a moment of panic. Or a hallucination. It was a series of very bad decisions for which there are likely to be long-term consequences unless the judge has got something wrong.

Practical points for litigators

Despite the significant risks that the use of AI poses to the administration of justice, both in individual cases and more generally, the Divisional Court made clear early in the judgment that:

Artificial intelligence is a powerful technology. It can be a useful tool in litigation, both civil and criminal. It is used for example to assist in the management of large disclosure exercises in the Business and Property Courts. … Artificial intelligence is likely to have a continuing and important role in the conduct of litigation in the future.

The context - including professionals understanding, being supported in, and complying with their obligations - is therefore important:

Artificial intelligence is a tool that carries with it risks as well as opportunities. Its use must take place therefore with an appropriate degree of oversight, and within a regulatory framework that ensures compliance with well-established professional and ethical standards if public confidence in the administration of justice is to be maintained. As Dias J said when referring the case of Al-Haroun to this court, the administration of justice depends upon the court being able to rely without question on the integrity of those who appear before it and on their professionalism in only making submissions which can properly be supported.

These sentiments were echoed in the Lady Chief Justice’s speech at Mansion House (3 July 2025):

I am confident that with time, training and cautious incrementalism,  the use of AI by lawyers and by judges will be as beneficial as it is inevitable. That does not mean, however, that we can or will be complacent. The legal profession needs to be ever-vigilant and robust in its approach to the use of AI. That means careful oversight by legal services regulators and more training and support for lawyers, particularly trainees and those in the early years of their careers, to enable them to use AI circumspectly and usefully. I am sure that we can and will learn from recent experiences, and that AI can be used appropriately as a tool to assist lawyers and judges to promote fairer, more efficient and effective access to the law and justice. As I have heard it said, I want AI to do my laundry, so that I can do art.

We now turn to the explanations the court provided in Ayinde v Haringey and the practical points arising from them.

Obligations

Barristers and solicitors are subject to a range of regulations and obligations setting out the standards expected of them when conducting litigation and supervising work. The court sets these out clearly, but they are too extensive to reproduce here; see paragraphs 17-22 of the judgment.

Regulators have provided guidance supporting those regulations (e.g. from the SRA and the BSB, and for judicial office holders).  We can expect further action from regulators, potentially soon. The Divisional Court sent a copy of the judgment to the regulators, inviting them to “consider as a matter of urgency what further steps they should now take in the light of this judgment.”  We can also expect outputs from working groups, such as the recently established Civil Justice Council AI working group and the TCC AI working group, and potential updates to procedural rules (see ‘Fake’ citations: Civil Justice Council sets up AI working group | Law Gazette).

The judgment makes clear that, based on those obligations: 

Those who use artificial intelligence to conduct legal research notwithstanding these risks have a professional duty therefore to check the accuracy of such research by reference to authoritative sources, before using it in the course of their professional work (to advise clients or before a court, for example).

This duty rests on lawyers who use artificial intelligence to conduct research themselves or rely on the work of others who have done so.

There are two points to note.

  1. First, reference is made to the use of AI to conduct legal research. AI can also be used for many other parts of legal practice; for example, the judgment recognises that the barrister in the Ayinde case could also have used AI in drafting submissions. As such, this statement could be read to apply more broadly than to the use of AI solely for legal research. 
  2. Second, those supervising others will not always know whether the people they rely on have (or have not) used AI. Practically, that means supervisors will continue to need to exercise care when checking and supervising others' work. 

How lawyers are trained and supervised will be important. The judgment continued:

practical and effective measures must now be taken by those within the legal profession with individual leadership responsibilities (such as heads of chambers and managing partners) and by those with the responsibility for regulating the provision of legal services. Those measures must ensure that every individual currently providing legal services within this jurisdiction (whenever and wherever they were qualified to do so) understands and complies with their professional and ethical obligations and their duties to the court if using artificial intelligence. For the future, in Hamid hearings such as these, the profession can expect the court to inquire whether those leadership responsibilities have been fulfilled.

However, the judgment states that creating and circulating guidance is insufficient to address the misuse of AI.  The court said: “More needs to be done to ensure that the guidance is followed and lawyers comply with their duties to the court.”

Court's powers

The judgment made clear that the court has a range of powers to ensure that lawyers comply with their duties to the court:

Where those duties are not complied with, the court's powers include public admonition of the lawyer, the imposition of a costs order, the imposition of a wasted costs order, striking out a case, referral to a regulator, the initiation of contempt proceedings, and referral to the police.

Its exercise of those powers depends on the facts of the case. Relevant factors are likely to include (but are not limited to): 

  1. the importance of setting and enforcing proper standards; 
  2. the circumstances in which false material came to be put before the court; 
  3. whether an immediate, full and truthful explanation is given to the court and to other parties to the case; 
  4. the steps taken to mitigate the damage, if any; 
  5. the time and expense incurred by other parties to the case, and the resources used by the court in addressing the matter; 
  6. the impact on the underlying litigation; and 
  7. the overriding objective of dealing with cases justly and at proportionate cost.

However, this depends on the court or tribunal.  For example, in the Intellectual Property Office trade mark decision BL O/0559/25, the Ayinde decision was referred to positively, but it was noted that the IPO does not have all the same powers as the Divisional Court.

How did this apply in practice?  In the Ayinde case:

  • Whilst the Divisional Court found that the threshold for contempt of court had been met, it did not initiate proceedings. Either: 1) the barrister intentionally included fake citations in submissions to the court, or 2) she did use generative AI when researching the cases and/or drafting the submissions, meaning her witness statement denying the use of GenAI was untruthful.  However, there were questions over her training, she had been reported to the regulator by the Divisional Court, and she had been criticised in a public judgment.  The Divisional Court made clear, though, that its decision not to initiate contempt proceedings in respect of the barrister is not a precedent. 
  • The solicitor was reported to the Solicitors Regulation Authority (SRA). There were questions about how they responded when notified of the non-existent citations, and about whether they had checked that the barrister was suitable for the case.
  • The paralegal was found not to be at fault. She had referred all matters to her supervisor or to counsel, as shown by contemporaneous internal correspondence and attendance notes (for which privilege had been waived). 

Conclusion

This case is in many ways not an AI case but instead one about the conduct of legal professionals. As stated in the judgment:

The facts of these cases raise concerns about the competence and conduct of the individual lawyers who have been referred to this court. They raise broader areas of concern however as to the adequacy of the training, supervision and regulation of those who practice before the courts, and as to the practical steps taken by those with responsibilities in those areas to ensure that lawyers who conduct litigation understand and comply with their professional and ethical responsibilities and their duties to the court.

The issues raised do not simply relate to the use of AI in legal research and drafting; they are more fundamentally about how legal professionals conduct themselves and comply with their obligations.  The court’s judgment in Ayinde is helpful as a very clear reminder of what those obligations are, their importance, and the court’s powers.  

However, the risks will continue to evolve.  AI technologies and use cases continue to develop.  Further, AI can exacerbate existing risks and create new ones: AI systems are often easy to access and use, and their output can appear realistic and accurate at first blush.  

As the Lady Chief Justice said, we cannot be complacent, and the legal profession needs to remain ever-vigilant and robust in its approach to the use of AI.  Further guidance from regulators should assist with this.

If you would like to discuss how current or future regulations impact what you do with AI, please contact Tom Whittaker, Brian Wong, Lucy Pegler, Martin Cook, Liz Smith or any other member of our Technology team.  For the latest on AI law and regulation, see our blog and newsletter.

With thanks to Stacie Bourton and Rachael Waring.
