EU AI Act – EU council agrees approach to streamline AI rules
The EU Council, which represents Member State governments, has published proposals to streamline AI rules as part of wider measures to simplify the EU's digital legislative framework (here).
Readers should remember that the AI Act is in force but that different sections come into effect at different times; for example, AI literacy requirements already apply, whilst obligations on high-risk AI systems apply from 2 August 2026. The proposed changes include amendments to parts of the Act that already apply, so readers will want to consider whether the proposals affect what they are doing to comply. Further, the AI Act applies to those inside the EU and also to those outside it, either directly, where they place AI systems or their output into the EU, or indirectly via their supply chain.
Here we summarise key proposals and next steps.
For a flowchart on how to navigate the EU AI Act, visit our practical guide here. Proposed amendments include:
Now that the Council has published its proposals, the European Parliament will report on its position and then trilogue discussions (Council, Parliament, Commission) will take place. The timetable is for a vote in June and publication of amendments in July 2026 (see The AI Act Omnibus: Timeline, Key Players, and Documents (March Update)).
These proposals should be seen in a wider context. The Commission has put forward ten 'Omnibus' packages aiming to simplify existing legislation on sustainability; investment; agriculture; small mid-caps; digitalisation and common specifications; defence readiness; chemical products; digital issues, including AI; the environment; the automotive sector; and food and feed safety.
However, the deadline driving the potential changes for AI is 2 August 2026, when obligations for high-risk AI systems under the AI Act are currently due to come into force.
The proposals recognise both the delays to publishing standards and frameworks required for implementing parts of the AI Act, and also that stakeholder feedback identified the need for clarification on how the Act will apply in practice. However, the proposals also reflect that the EU continues to strike a balance between innovation and protecting health, safety, and fundamental rights of EU citizens.
Consequently, it is unclear whether the current proposals reflect the direction of travel, whether there will be further significant changes, or indeed whether the timeline will be met at all. If it is not met, obligations for high-risk AI systems would apply from 2 August 2026 even though there is an intent to delay the application start date. We can therefore at least expect clarification about how and when obligations for high-risk AI systems apply.
If you would like to discuss how current or future regulations impact what you do with AI, please contact Tom Whittaker, Brian Wong, Lucy Pegler, Martin Cook, Liz Griffiths or any other member in our Technology team. For the latest on AI law and regulation, see our blog and newsletter.