AI Governance, Automated Decision Making and IP: Unpacking the Data (Use and Access) Act

Data (Use and Access) Act – Overview

After significant debate and various hurdles, the Data (Use and Access) Bill has finally been passed by the UK Parliament and received Royal Assent on 19 June 2025. It is now the Data (Use and Access) Act (“DUAA”). The DUAA represents a significant legislative effort to modernise the UK’s legal framework governing data use, access and regulation, particularly as AI systems become increasingly embedded in decision-making processes and content generation.

The DUAA forms part of the government’s wider digital transformation agenda, aiming to streamline data access and use across sectors while ensuring appropriate legal and ethical safeguards are in place, particularly in areas such as health and social care (which we discuss further here), public services and online safety. It introduces reforms in key areas including data sharing between public bodies, improved transparency and accountability in data processing, the establishment of a digital identity framework, and updates to oversight mechanisms.

In this article, we focus on the key changes introduced by the DUAA and its implications for AI. For a broader overview of the DUAA, please see our article here.

Key data protection regime changes introduced by the DUAA that impact AI include:

  • Obligations for organisations to disclose the use of tracking technologies, cookies, and data, along with restrictions on profiling for direct marketing;
  • Creation of a trust framework and register of providers to regulate digital verification services, ensuring the accuracy and reliability of personal information shared by public bodies; and
  • Reform of the Information Commissioner’s Office (“ICO”) governance structure, with the transfer of its functions to a new body - the Information Commission - as originally proposed under the previous Data Protection and Digital Information Bill.

The overarching goal of the DUAA is to create a more flexible, coherent, and innovation-friendly legal framework that supports the UK’s ambition to be a global leader in digital and data-driven technologies. Within this broader context, two areas stand out for their relevance to AI, which this article will focus on: the DUAA’s approach to regulating automated decision-making (“ADM”) and its proposals to protect copyrighted content from unauthorised use in AI model training and data scraping.

Progression of the DUAA and AI Governance 

Increased flexibility for ADM

The DUAA introduces updates to the legal framework surrounding ADM, reflecting the growing reliance on algorithmic systems in sectors such as finance, recruitment, healthcare, and public administration. The DUAA eases some of the constraints under Article 22 UK GDPR, which limit the use of solely automated decisions that produce legal or similarly significant effects.

Under the DUAA, organisations stand to gain broader scope to deploy automated systems, provided they implement appropriate safeguards.

In line with the UK’s principles-based approach to AI regulation, the DUAA revises Article 22, particularly for organisations processing non-special category personal data for ADM purposes. Notably, organisations will now be permitted to rely on ‘legitimate interests’ as a lawful basis for such processing, so long as the organisation, as data controller, ensures that suitable safeguards are in place. These safeguards, illustrated in the sketch after this list, include:

  • Providing individuals with detailed information about the use of ADM systems, including the decision-making logic and the data used, to enhance transparency;
  • Offering the right to request a human review, enabling decisions to be challenged and reviewed for fairness;
  • Conducting impact assessments to evaluate risks and benefits, with a focus on privacy, fairness, bias, and discrimination; and
  • Implementing accountability measures, such as regular audits and compliance checks, to ensure alignment with data protection obligations.
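To make these safeguards concrete, the sketch below shows one way they might surface in an ADM workflow: a transparency record of the decision logic and data used, a route to human review, and an audit log. This is a minimal illustration only - the DUAA does not prescribe any technical design, and all names and the scoring logic here are hypothetical.

    from dataclasses import dataclass


    @dataclass
    class ADMDecision:
        """Record of a solely automated decision, retained for audit."""
        subject_id: str
        outcome: str
        logic_summary: str          # plain-language description of the decision logic
        data_categories: list[str]  # categories of personal data relied upon
        human_reviewed: bool = False


    def make_automated_decision(subject_id: str, features: dict) -> ADMDecision:
        # Hypothetical scoring logic standing in for a real model.
        score = sum(v for v in features.values())
        outcome = "approved" if score >= 10 else "declined"
        return ADMDecision(
            subject_id=subject_id,
            outcome=outcome,
            # Transparency safeguard: disclose the logic and the data used.
            logic_summary="Additive score over declared indicators; threshold 10.",
            data_categories=sorted(features),
        )


    def request_human_review(decision: ADMDecision) -> ADMDecision:
        # Safeguard: the individual can contest the outcome and obtain review
        # by a human empowered to overturn the automated decision.
        decision.human_reviewed = True
        return decision


    # Accountability safeguard: decisions are logged for regular audits.
    audit_log: list[ADMDecision] = []

    decision = make_automated_decision("applicant-42", {"income": 7, "savings": 4})
    audit_log.append(decision)
    if decision.outcome == "declined":
        decision = request_human_review(decision)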

Importantly, more restrictive provisions will continue to apply to high-risk ADM involving special category personal data (such as health data, racial or ethnic origin, political beliefs and religious views), or biometric and genetic information used for unique identification. In such cases, decisions producing legal or similarly significant effects will still require one of the existing safeguards: explicit consent, necessity for contract performance, or a substantial public interest justification.

In this respect, the DUAA appears to offer increased flexibility to businesses while preserving critical protections for individuals. The UK’s Information Commissioner has expressed support for the changes, describing them as striking “a good balance between facilitating the benefits of automation and maintaining additional protection for special category data.” The ICO has said it plans to issue a new AI and ADM code of practice within the next year.

Protection for copyright holders from unauthorised web-crawler and AI data collection

As generative AI models (capable of producing text, images, and code) continue to scale in complexity and commercial application, concerns have grown around the unauthorised scraping of online content and the use of copyrighted materials for AI model training. In response, various proposals were introduced during the DUAA’s progression through Parliament, aiming to strengthen copyright protections in the context of AI. These included potential restrictions on the use of web-crawlers and clearer limitations on data mining for training purposes. The proposals reflected a broader consensus that copyright law must evolve to address the realities of generative AI, although questions remain over how such measures might impact access to data for research and innovation.

Supporters of the proposed provisions argued that they were necessary to uphold intellectual property rights and ensure fair use - particularly for creators in sectors such as publishing, music, and the visual arts. Critics, however, warned that overly rigid restrictions could inhibit AI development and disproportionately favour larger firms with the resources to license extensive datasets.

Web-crawlers, which are automated programs that scan and index content across the web, can collect large volumes of data, including copyrighted works, often without the consent of rights holders. Several amendments were proposed during the House of Lords report stage to address the risks posed by such data gathering. 
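For readers unfamiliar with the mechanics, well-behaved crawlers consult a site’s robots.txt file before fetching content. The minimal Python sketch below shows that check using the standard library; the crawler name and URL are invented for illustration.

    from urllib.parse import urlparse
    from urllib.robotparser import RobotFileParser


    def may_fetch(url: str, user_agent: str = "ExampleTrainingBot") -> bool:
        """Return True if the site's robots.txt permits this crawler to fetch the URL."""
        parsed = urlparse(url)
        robots = RobotFileParser(f"{parsed.scheme}://{parsed.netloc}/robots.txt")
        robots.read()  # download and parse the site's robots.txt
        return robots.can_fetch(user_agent, url)


    if may_fetch("https://example.com/articles/some-page"):
        pass  # fetch and index the page
    else:
        pass  # skip: the site has opted this path out for this user agent

Compliance with robots.txt is, however, a voluntary convention with no legal force - one reason rights holders have pressed for statutory protections of the kind proposed here.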

These (ultimately rejected) clauses included:

  • A requirement for a government statement on how the Copyright, Designs and Patents Act 1988 applies to the activities of web crawlers and AI models that may infringe copyright. This aimed to clarify the scope of current law and provide a firmer legal basis for enforcement;
  • A requirement for a government plan to ensure transparency around the use of copyrighted materials in AI training and generative processes, giving rights holders greater visibility and potentially enabling compensation claims;
  • A requirement to develop a plan aimed at reducing barriers to entry for AI start-ups, particularly around access to datasets. The clause was designed to help address imbalances between large incumbents and smaller developers, in line with objectives outlined in the AI Opportunities Plan (which we discuss here); and 
  • A requirement for the development of a machine-readable digital watermarking standard, to identify licensed content and provide a clearer mechanism for rights management in the context of AI training and generation.
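The watermarking proposal in the final bullet is the most technical of these. No such standard was adopted, so purely as a hypothetical sketch, a machine-readable rights declaration might take the form of a metadata ‘sidecar’ published alongside a work; every field name below is invented for illustration.

    import json

    # Hypothetical machine-readable rights declaration. No watermarking standard
    # was adopted under the DUAA, so these field names are illustrative only.
    rights_manifest = {
        "work_id": "urn:example:work:12345",  # identifier for the protected work
        "rights_holder": "Example Publisher Ltd",
        "ai_training": "licence-required",    # e.g. "permitted" / "licence-required" / "prohibited"
        "licence_contact": "https://example.com/licensing",
    }

    # Written alongside the work so a crawler can parse it before ingesting.
    with open("work.rights.json", "w", encoding="utf-8") as f:
        json.dump(rights_manifest, f, indent=2)

A crawler such as the one sketched above could then check a manifest of this kind, alongside robots.txt, before deciding whether to ingest a work.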

While acknowledging the rationale behind these proposals, the government contended that AI and copyright issues should be addressed through a separate, more comprehensive framework - supported by consultation, stakeholder engagement, and policy development. Chris Bryant, Minister of State at the Department for Science, Innovation and Technology, has further suggested that the government will legislate separately on AI within the next 18 months. Specifically, the government emphasised the need for a coordinated approach, including greater transparency in training data, mechanisms for rights-holder control, and industry-led solutions.

In December 2024, the government launched a consultation on copyright and AI to gather stakeholder input on these issues. Citing the need to avoid pre-empting the outcome of that process, the government opposed the proposed amendments, and they were ultimately removed from the DUAA following divisions in the House of Lords. Discussions are expected to continue in parallel with the consultation analysis, although it is unlikely that any substantive legislative changes will be introduced until that process is complete.

Current Status and Outlook

The DUAA has now been passed, receiving Royal Assent on 19 June 2025.

Some provisions of the DUAA take effect immediately, while others will be rolled out in phases through secondary legislation. Most of the core provisions, such as changes to lawful bases for processing, ADM and data subject rights, are anticipated to come into force fairly quickly. The government has indicated that further legislation, particularly around AI and copyright, will follow within the next 12 to 18 months.

Meanwhile, the UK’s adequacy status with the EU for transfers of personal data remains under review, with a decision likely deferred to late 2025 to allow for assessment of the finalised framework.

Conclusion - Impacts on AI Governance in the UK

The DUAA represents a significant shift in the UK’s approach to data governance, introducing reforms that support responsible innovation while updating oversight for an AI-driven future. While not explicitly framed as AI legislation, the DUAA includes measures with substantial implications for AI governance. Proposals made during its progression highlight the UK’s preference for a flexible, sector-led framework - balancing innovation with accountability and rights protection. This contrasts with the EU’s more centralised approach under the AI Act, where regulation is consolidated within a single framework.

If you have any questions or would otherwise like to discuss how current or future regulations impact what you do with AI, please contact Martin Cook, Tom Whittaker, Madelin Sinclair McAusland, Amanda Leiu and Liz Smith, or any other member of our Commercial and Technology team.

This article was written by Yadhavi Analinkumar and Victoria McCarron.