
EU AI Act – EU council agrees approach to streamline AI rules


The EU Council, which represents Member State governments, has published proposals to streamline AI rules as part of wider rules to simplify the EU's digital legislative framework (here).  

Readers should remember that the AI Act is in force but that different sections take effect at different times; for example, AI literacy requirements already apply, whilst obligations on high-risk AI systems apply from 2 August 2026. The proposed changes include amendments to parts that already apply, so readers will want to consider whether the proposals affect their current compliance work. Further, the AI Act applies both to those inside the EU and to those outside it, either directly, where they place AI systems or their output into the EU, or indirectly via their supply chain.

Here we summarise key proposals and next steps.

Proposed amendments

For a flowchart on how to navigate the EU AI Act, visit our practical guide here.  Proposed amendments include:

  • adjust the timeline for applying the rules on high-risk AI systems by up to 16 months, so that the rules start to apply once the Commission confirms the needed standards and tools are available. The text also introduces a fixed timeline for the delayed application of the high-risk rules: the new application dates would be 2 December 2027 for stand-alone high-risk AI systems and 2 August 2028 for high-risk AI systems embedded in products;
  • add a new obligation for the Commission to provide guidance to assist economic operators of high-risk AI systems covered by sectoral harmonisation legislation in complying with the high-risk requirements of the AI Act in a manner that minimises compliance burden;
  • make targeted amendments that would extend certain regulatory exemptions granted to SMEs to small mid-caps (SMCs), reduce requirements in a very limited number of cases, and extend the possibility to process sensitive personal data for bias detection and mitigation;
  • reinforce the AI Office's powers and reduce governance fragmentation;
  • prohibit AI practices regarding the generation of non-consensual sexual and intimate content or child sexual abuse material;
  • reinstate the obligation for providers to register AI systems in the EU database for high-risk systems where they consider their systems to be exempt from classification as high-risk;
  • postpone the deadline for the establishment of AI regulatory sandboxes by national competent authorities until 2 December 2027; and
  • clarify the competences of the AI Office for the supervision of AI systems based on general-purpose AI models where the model and the system are developed by the same provider, by listing exceptions where national authorities remain competent, including law enforcement, border management, judicial authorities and financial institutions.

Next steps

Now that the Council has published its proposals, the European Parliament will report on its position and then trilogue discussions (Council, Parliament, Commission) will take place. The timetable envisages a vote in June and publication of amendments in July 2026 (see further details in The AI Act Omnibus: Timeline, Key Players, and Documents (March Update) here).

These proposals should be seen in a wider context. The Commission has put forward ten 'Omnibus' packages aiming to simplify existing legislation on sustainability, investment, agriculture, small mid-caps, digitalisation and common specifications, defence readiness, chemical products, digital issues (including AI), the environment, the automotive sector, and food and feed safety.

However, the deadline driving the potential changes for AI is 2 August 2026, when the obligations for high-risk AI systems under the AI Act are currently due to come into force.

The proposals recognise both the delays to publishing standards and frameworks required for implementing parts of the AI Act, and also that stakeholder feedback identified the need for clarification on how the Act will apply in practice. However, the proposals also reflect that the EU continues to strike a balance between innovation and protecting health, safety, and fundamental rights of EU citizens. 

Consequently, it is unclear whether the current proposals reflect the final direction of travel, whether there will be further significant changes, or indeed whether the timeline will be met at all. If it is not, obligations for high-risk AI systems would appear to apply from 2 August 2026 even though there is an intent to delay the application start date. So we can at least expect clarification about how and when obligations for high-risk AI systems apply.

If you would like to discuss how current or future regulations impact what you do with AI, please contact Tom Whittaker, Brian Wong, Lucy Pegler, Martin Cook, Liz Griffiths or any other member of our Technology team. For the latest on AI law and regulation, see our blog and newsletter.
