ICO Tech Futures report: Agentic AI – Key Takeaways for Organisations
Gartner forecasts that by 2026, around 40% of enterprise applications will incorporate AI agents – a steep rise from under 5% in 2023 – illustrating how quickly agentic AI is becoming embedded in enterprise software. Last month, the UK Information Commissioner’s Office (ICO) published its “Tech Futures: Agentic AI” report, which explores the emerging landscape of agentic artificial intelligence (AI), highlighting both the opportunities and the data protection challenges that organisations should consider as they develop or deploy these technologies.
Below, we summarise the key takeaways from the ICO’s report, with added commentary on practical implications for organisations considering or already using agentic AI systems.
What is Agentic AI?
Agentic AI refers to systems that can autonomously pursue goals, adapt to new situations, and exhibit reasoning-like capacities. These systems combine generative AI (such as large language models like GPT, Gemini or Claude) with tools that enable contextual understanding and automation of open-ended tasks.
Unlike traditional agents, these systems leverage large language models to iterate and reason, enabling them to complete a wide range of tasks with minimal human intervention. This autonomy introduces both efficiency gains and governance risk that organisations must manage proactively.
Key Takeaways
ICO’s Position and Regulatory Direction
The ICO recognises the rapid development and adoption of agentic AI across sectors including commerce, government services and medicine. It notes that the novel nature of agentic AI, combined with its fast rollout, makes future capabilities difficult to predict, and calls for flexible governance frameworks in response.
While the ICO’s report does not constitute formal guidance, it provides a clear indication of the regulator’s early thinking and priority areas. It highlights accountability, transparency, and governance as likely focal points for compliance, and outlines several data protection risks associated with the deployment of agentic AI, as well as some data protection opportunities it may present. For organisations, this is a strong signal to start aligning internal policies and risk assessments now, rather than waiting for prescriptive rules.
The ICO is also actively engaging with industry stakeholders and international regulators to shape the next phase of guidance and codes of practice. Businesses should monitor these discussions closely and consider participating in consultations to influence outcomes and ensure they remain ahead of evolving regulatory requirements.
Data Protection Risks and Organisational Responsibilities
The ICO emphasises that agentic AI systems, regardless of their autonomy, do not have legal personality. Accordingly, organisations remain responsible for compliance under the UK GDPR and related laws. This means that even where AI agents act independently, the organisation deploying them must ensure that governance, oversight and accountability mechanisms are in place.
Key risks identified include:
Determining controller/processor roles in complex supply chains
Agentic AI often relies on multiple integrated tools and APIs, creating layered processing chains. Organisations should therefore clearly define who acts as controller and who acts as processor at each stage; failure to do so could leave gaps in accountability and create enforcement risk.
Automated decision-making with limited human oversight
Agents capable of iterative reasoning may make decisions that affect individuals with minimal or no human intervention. This raises compliance issues under Article 22 UK GDPR and fairness principles. Organisations should ensure meaningful human oversight for such decisions, documenting this oversight in relevant DPIAs.
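One way to operationalise this oversight requirement is a simple human-in-the-loop gate that holds back agent decisions with legal or similarly significant effects until a person has reviewed them. The sketch below is purely illustrative: the class and function names, and the two-flag test for when review is needed, are our assumptions rather than anything prescribed by the ICO report or the UK GDPR.

```python
# Hypothetical sketch of a human-in-the-loop gate for agent decisions.
# The AgentDecision fields and the review test are illustrative assumptions,
# not a prescribed Article 22 UK GDPR compliance mechanism.
from dataclasses import dataclass


@dataclass
class AgentDecision:
    action: str
    affects_individual: bool   # does the decision concern a data subject?
    significant_effect: bool   # legal or similarly significant effect?


def requires_human_review(decision: AgentDecision) -> bool:
    """Flag decisions that need meaningful human oversight before execution."""
    return decision.affects_individual and decision.significant_effect


def execute(decision: AgentDecision, human_approved: bool = False) -> str:
    """Run the action only if it needs no review, or a human has approved it."""
    if requires_human_review(decision) and not human_approved:
        return "queued_for_human_review"
    return "executed"


# A credit decision about an individual is held for review;
# a routine internal task proceeds automatically.
credit = AgentDecision("decline_credit_application", True, True)
routine = AgentDecision("summarise_report", False, False)
```

Recording which decisions were gated, and who approved them, would also provide the documentation trail a DPIA is likely to expect.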
Broad or unclear purposes for data processing
AI agents are versatile and therefore carry a heightened risk of processing data for broader purposes than intended once they are deployed. If the purposes of agents’ use are not clearly set out, organisations run the risk of breaching transparency, purpose limitation and data minimisation obligations.
Increased complexity in transparency and individual rights
The rapidly evolving nature of agentic AI makes it challenging for organisations that deploy AI agents to meet transparency and explainability obligations. The ICO notes that there are already cases of AI agents acting in ways an organisation did not foresee at deployment, making it difficult for those organisations to fully understand how and where relevant information is processed.
Practical Steps for Compliance
Organisations should:
Define clear purposes and task boundaries
Organisations should set specific, limited purposes for each agentic AI deployment to prevent agents from overstepping their intended scope, and should communicate these purposes clearly to users.
Implement data minimisation controls
Where possible, restrict the data accessible to AI agents to what is necessary for their tasks. This may include configuring tool access to avoid unintended data collection.
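In practice, restricting tool access often amounts to allowlisting: the agent may only call approved tools, and any record it receives is stripped to the fields its task requires. The sketch below is a minimal illustration of that idea; the tool names, field names and sample record are hypothetical, not drawn from the ICO report.

```python
# Illustrative data-minimisation controls for an AI agent.
# Tool names, field names and the sample record are hypothetical.
ALLOWED_TOOLS = {"calendar_lookup", "document_search"}
ALLOWED_FIELDS = {"name", "meeting_time"}  # no addresses, no contact details


def call_tool(tool: str, record: dict) -> dict:
    """Permit only allowlisted tools, and strip fields the task does not need."""
    if tool not in ALLOWED_TOOLS:
        raise PermissionError(f"Tool '{tool}' is not permitted for this agent")
    # Data minimisation: drop any fields beyond what the task requires.
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}


record = {"name": "A. Example", "meeting_time": "10:00", "home_address": "1 Test St"}
minimised = call_tool("calendar_lookup", record)
# minimised retains only the name and meeting_time fields
```

Keeping the allowlists in configuration rather than code also makes it easier to evidence, in a DPIA, exactly what data each agent could access at a given time.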
Ensure transparency and explainability of agentic AI actions
Clear and up-to-date information should be provided on what your organisation’s AI agents do, the data they access and how data subjects can exercise their rights. In this regard, agent activity logs may be helpful. Organisations should ensure that agents’ actions can be continuously monitored and stopped if issues arise.
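An activity log of this kind can be very simple: an append-only record of what each agent did and which categories of data it touched, plus a stop switch so the agent can be halted if issues arise. The sketch below assumes this shape; the class and field names are our own illustration, not a mechanism specified by the ICO.

```python
# Hypothetical agent activity log with a stop switch, supporting
# monitoring, explainability and rights-request handling.
# Class and field names are illustrative assumptions.
import datetime


class AgentMonitor:
    def __init__(self) -> None:
        self.log: list[dict] = []
        self.stopped = False

    def record(self, agent_id: str, action: str, data_categories: list[str]) -> None:
        """Append an auditable entry describing what the agent did."""
        self.log.append({
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "agent": agent_id,
            "action": action,
            "data_categories": data_categories,  # aids rights requests
        })

    def stop(self) -> None:
        """Halt the agent if unexpected behaviour is detected."""
        self.stopped = True


monitor = AgentMonitor()
monitor.record("agent-1", "retrieved_customer_record", ["contact_details"])
monitor.stop()
```

Logging the data categories alongside each action is what makes the log useful for responding to subject access and erasure requests, not just for debugging.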
Conduct Data Protection Impact Assessments (DPIAs) for high-risk agentic AI deployments
It is very likely that the deployment of AI agents will result in a high risk to people’s information. Therefore, organisations should carry out DPIAs for AI agent use. DPIAs should consider the risks, controls and impacts of using agentic AI before agents are deployed and should be updated regularly.
Prepare for evolving roles of Data Protection Officers (DPOs)
The ICO notes that agentic AI increases the complexity of oversight, documentation and rights management, which may reshape the responsibilities and skill‑set expected of DPOs. Organisations should ensure their DPOs are adequately supported to understand agentic AI workflows to mitigate risk arising from the use of agents. Additionally, the ICO notes organisations may consider exploring how AI‑enabled “DPO agents” could assist with tasks such as monitoring, logging and rights‑request handling while ensuring that ultimate accountability remains with the human DPO.
Identified Innovation Opportunities
The ICO is clear that it encourages privacy-by-design innovation with respect to agentic AI.
Future Scenarios
The ICO outlines four possible future scenarios with respect to agentic AI development, ranging from limited adoption to widespread, high-capability agentic AI. Each scenario carries different regulatory and governance implications: lower‑capability agents may pose fewer risks but offer limited benefits, while high‑capability, widely‑deployed agents demand far more sophisticated oversight, auditability and role‑allocation controls. The ICO’s use of scenario planning highlights that organisations should prepare governance frameworks that can evolve as agentic AI systems become more capable and more deeply embedded across operations.
Conclusion
The ICO’s report signals a proactive regulatory approach to the growing sphere of agentic AI, emphasising the need for organisations to balance innovation with robust data protection measures. Organisations developing or deploying agentic AI should review their governance frameworks, ensure compliance with UK GDPR principles and stay engaged with evolving ICO guidance to ensure that they are best placed to deploy agentic AI responsibly and efficiently.
For queries or advice on the content of this article, please contact Hamish Corner, Tom Whittaker, Amanda Leiu or a member of Burges Salmon's Commercial & Technology team.
This article was written by Ruadhán Ó Gráda and Amanda Leiu.