

Artificial Intelligence regulation update for start-ups: UK and EU signals in early 2026


Early 2026 has brought two clear messages for AI start-ups. In the EU, policymakers are trying to simplify parts of the AI Act. In the UK, the focus remains on sector-led regulation, sandboxing and growth infrastructure. 

For founders, product teams and investors, that matters because AI regulation is no longer just about restrictions. It is becoming part of how businesses evidence trust, secure market access and scale in regulated sectors.

The EU is looking to simplify AI Act implementation, not change direction

The most notable EU development is the proposed Digital Omnibus package published in November 2025. Its aim is not to replace the EU AI Act’s risk-based model, but to make implementation more workable. 

In practical terms, the proposal would ease some compliance burdens for SMEs and small mid-cap businesses, reduce certain documentation requirements and broaden the ability to use sensitive data for bias detection, subject to safeguards. It would also link the application of high-risk AI obligations more closely to the availability of standards, guidance and other support tools, with long-stop dates extending into 2027 and 2028. The proposals also point to a stronger role for the EU AI Office and a softer approach to AI literacy, with more emphasis on support from the Commission and Member States.

That is helpful, but it does not remove uncertainty. Businesses still need to plan for the possibility that elements of the original AI Act timetable could bite before the proposed simplifications are fully in place.

“Europe’s businesses, from factories to start-ups, will spend less time on administrative work and compliance and more time innovating and scaling-up.”   

For start-ups operating in or selling into the EU, the practical point is straightforward: the EU AI Act may prove less burdensome than feared, but the only way to know, and to demonstrate that to customers and investors, is to assess compliance.

The UK is sticking with a sector-led model and adding more support

The UK has continued to resist a single cross-economy AI rulebook. Instead, existing regulators remain in the lead within their own remits, including the ICO, FCA, MHRA, CMA and Ofcom. 

That means the UK picture is still principles-based and decentralised. The familiar themes remain safety, transparency, fairness, accountability and contestability. What is becoming clearer, however, is that the UK wants regulators to support adoption as well as supervise risk. In practice, that support is expected to come through guidance, standards, assurance tools and regulator coordination, rather than a single new AI statute in the short term. One example is the ICO's guidance on agentic AI. Following announcements about Anthropic's Mythos, cyber security is also an area of focus. UK regulators are discussing the implications, and the UK Government and NCSC urge firms to treat AI systems as part of the attack surface, align with the AI Cyber Security Code of Practice, and tighten patch timing, access controls and monitoring.

The AI Opportunities Action Plan, published in January 2025 and followed by a one-year update in 2026, sits behind much of this. It is tied to investment in compute, datasets, skills and public sector adoption. It also sits alongside practical initiatives such as AI Growth Zones and the proposed AI Growth Lab, a cross-economy sandbox that would allow supervised, time-limited regulatory flexibility in areas where existing rules may slow deployment. 

For start-ups, the message is that UK policy is trying to reduce friction around testing and scaling, while still expecting firms to show that their systems are safe, explainable and properly governed. 

Healthcare and financial services show how this works in practice

The healthcare and financial services sectors are good examples of the UK approach.

In healthcare, the MHRA launched a call for evidence in December 2025 to inform the National Commission into the Regulation of AI in Healthcare. The focus is not only on whether current rules are adequate, but also on how safety should be monitored once AI tools are in use, how responsibilities should be shared across the supply chain and how regulation should respond as systems become more adaptive. For health-tech businesses, the existing medical devices framework remains the starting point, but expectations around post-market monitoring, accountability and real-world use are moving up the agenda. Programmes such as the MHRA's AI Airlock also suggest that supervised testing and early engagement will remain part of the regulatory picture. 

In financial services, the FCA has continued to say that it does not see a need for a new AI-specific rulebook at this stage. Instead, it is relying on existing frameworks such as Consumer Duty, SM&CR and operational resilience. At the same time, it is building practical support through its AI Lab, AI Sprint, Supercharged Sandbox and AI Live Testing. For earlier-stage firms, those programmes matter because they can offer access to testing environments, regulatory feedback and, in some cases, infrastructure that would be harder to assemble internally. Its January 2026 Mills Review also shows where longer-term attention is heading: agentic AI, consumer delegation, fraud, market concentration, dependency on third-party providers and explainability. 

Across both sectors, the pattern is similar. Regulators are not standing back, but they are also not rushing to legislate first and ask questions later.

What start-ups should do now

For AI start-ups, three points stand out. 

First, map the regulatory pathway early. In the UK, that means identifying which regulator or combination of regulators matters most. In the EU, it means understanding how the AI Act applies (or does not apply) to you. 

Second, build governance into the product, not around it. Explainability, testing, accountability, data quality and monitoring are not side issues for later diligence. They are increasingly part of procurement, investment and deployment decisions. 

Third, use the available engagement routes. In the UK especially, regulators are signalling that sandboxes, live testing and consultation processes are there to be used. Early engagement may help firms reduce uncertainty and shape how expectations develop. 

Why this matters

The broader trend is that AI regulation is becoming part of commercial strategy. In both the UK and the EU, the debate is moving beyond whether to regulate AI at all. The real question is what kind of regulatory environment will let trustworthy AI products reach the market more quickly. 

For start-ups, that means regulation should be read as both a risk issue and a route to credibility. 

If you would like to discuss how current or future regulations impact what you do with AI, please contact Tom Whittaker, Brian Wong, Lucy Pegler, Martin Cook or any other member in our Technology team.

This article was written by Nathan Gevao.
