The Data (Use and Access) Act 2025: Navigating the New Rules around Automated Decision-Making (Part 2)

Building on our earlier updates on the Data (Use and Access) Act 2025 (“DUAA”) (see here and here), this article forms Part 2 of an ongoing series from Burges Salmon’s Commercial & Technology team as the DUAA’s phased provisions come into force. See Part 1 for the key implications of the DUAA for international transfers.

Below, we cover the evolving framework for automated decision-making (“ADM”) and provide practical guidance on how businesses should adapt to meet these requirements.

What is ADM and what were the previous ADM rules?

ADM refers to decisions made about an individual solely by automated means, without meaningful human involvement, which produce legal effects or similarly significant effects for that individual. In practice, many decisions made by automated means involve an element of human intervention in the process and so fall outside the ADM rules.

To qualify as ADM, the processing must have a significant effect on the individual, such as decisions affecting their financial status, access to essential services, or employment position. Examples include an online bank’s automatic refusal of a loan application or certain uses of screening software in recruitment processes.

Prior to the DUAA, ADM was generally prohibited under Article 22 of the UK GDPR unless one of the following conditions was met:

  • the processing was necessary for a contract between a data controller and an individual;
  • the processing was authorised by domestic law, subject to safeguards being in place; or
  • explicit consent was obtained from the data subject.

Where special category data is processed for ADM, organisations need either explicit consent or to show that the processing is necessary for reasons of substantial public interest.

New ADM rules in force

Reflecting the increasing use of AI-driven decision-making across a range of sectors and a continuing interest from policymakers in facilitating the development of emerging AI technologies, the DUAA introduces a more permissive regime for ADM.

Following secondary legislation that took effect on 5 February 2026, the DUAA largely removes the general prohibition on ADM. Organisations can now rely on any of the existing lawful bases under the UK GDPR, including legitimate interests, provided mandatory safeguards are applied.

The mandatory safeguards include:

  • informing individuals about the use of ADM in relation to them;
  • providing individuals with an opportunity to make representations about automated decisions;
  • offering meaningful human intervention (i.e. keeping a human in the loop); and
  • giving individuals the opportunity to contest an automated decision.

ADM involving special category data (e.g. health data, biometric data or data relating to religious beliefs) remains restricted and can only be carried out in narrow circumstances (see above). 

In addition, the DUAA gives the Secretary of State power to make further regulations in relation to what constitutes ‘meaningful human involvement’ in decisions and a ‘significant effect’ on individuals (both core elements of the definition of ADM), and to make changes to the mandatory safeguards. 

Key considerations

Meaningful human involvement

For a decision to be considered ADM, it must be made without “meaningful human involvement”. What counts as “meaningful” in the context of current algorithmic systems is likely to be an important question. Previous ICO guidance has noted that for human involvement to be meaningful, it “should come after the automated decision has taken place” and must relate to the actual outcome. However, many modern large-scale algorithmic decision-making systems (e.g. content moderation on online platforms or credit scoring for loan applications) operate on an automated triage basis, with humans reviewing exceptions only. Whether such systems qualify as ADM under the DUAA will turn on whether any human intervention is genuinely capable of altering the outcome of a significant decision, rather than on the volume of decisions or the triage structure of the system.

Separately, the growing complexity of AI systems is likely to make human involvement an increasingly specialised endeavour. The rise of autonomous AI agents, for example, is likely to make it difficult for human reviewers to independently oversee an entire automated decision-making process and evidence meaningful intervention under the DUAA’s statutory test.

Relatedly, an established body of research indicates that human reviewers of automated decisions frequently ‘rubber stamp’ algorithmic outputs, a phenomenon known as ‘automation bias’. Such factors could arguably undermine the meaningfulness of a human review. The ICO is expected to publish updated guidance on ADM imminently (scheduled for Spring 2026), which may provide clarity on some of these points.

As a general note, the Secretary of State’s new power to change (by regulation) what counts as “meaningful human involvement” indicates that the ADM rules are likely to be more fluid than they were prior to the DUAA.

Human in the loop

Businesses whose systems fall within the definition of ADM will be required to implement the mandatory safeguards described above. These safeguards represent a significant obligation and will require robust systems to ensure that automated decisions can be monitored, explained and contested. Relevant businesses can expect to be tested on their AI literacy and their understanding of automated decision-making processes. The requirement to enable individuals to obtain human intervention in relation to automated decisions is likely to raise questions over the extent to which a human must be ‘in the loop’. Emerging best practice suggests that offering human intervention requires all human reviewers to understand model limitations and bias, the relevant risk factors, and when escalation is needed.

Transparency and the “black box” problem 

One of the DUAA’s mandatory safeguards is the requirement to inform individuals about the use of ADM in relation to them. It is unclear whether this simply reinforces the existing transparency requirements under the UK GDPR or operates as a separate obligation under the new ADM rules.

Under the UK GDPR, organisations must already inform individuals about the existence of ADM, including profiling, and provide “meaningful information” about the logic involved, including its significance and potential consequences. For many businesses, however, this presents a significant technical hurdle: modern AI systems often operate as “black boxes”, making it difficult to explain how specific outputs are generated.

Moreover, the nature and extent of an individual’s right to receive ‘meaningful information’ about the ADM remains a grey area for many organisations. For example, how much detail does an organisation need to provide? How granular must the information be to count as meaningful; for instance, does it need to be specific to the individual? As noted above, updated ICO guidance on ADM is expected imminently and may provide clarity on some of these points.

Practical takeaways

The DUAA means that the UK and EU now take diverging regulatory approaches to ADM. Businesses operating in both jurisdictions (as many will) should assess whether to retain the more restrictive pre-DUAA position (to align with the EU) or implement a separate approach for the UK.

Businesses conducting or adopting ADM should consider the following key steps now to ensure alignment with the new rules: 

  • Monitor forthcoming ICO guidance on ADM (expected in Spring 2026).
  • Map and audit any existing ADM (and planned ADM). Document the types of data involved (including whether special category data is processed), the lawful basis relied on, and whether any meaningful human involvement is present.
  • Ensure that the mandatory safeguards are implemented where conducting ADM.
  • Review and update privacy notices to explain where ADM is used, the logic involved, what safeguards apply, and how individuals can seek human intervention or challenge an outcome.
  • Update DPIAs to reflect the new ADM rules, and the ongoing restriction on ADM where special category data is involved. Additionally, DPIAs should address how the DUAA’s ADM safeguards are satisfied where applicable.
  • Update internal training and governance so that reviewers of automated decisions understand AI model limitations and the circumstances in which they must escalate or override an outcome.

For queries or advice on the content of this article, please contact Hamish Corner, Lucy Pegler, Amanda Leiu or a member of Burges Salmon's Commercial & Technology team.

This article was written by Ruadhán Ó Gráda and Amanda Leiu.
