EU Artificial Intelligence Act – one year on and further proposed amendments from the Committees on the Internal Market and Civil Liberties

Nearly one year after the European Commission published the draft Regulation on Artificial Intelligence ("the AI Act"), further amendments have been proposed in the draft report of the Committee on the Internal Market and Consumer Protection (IMCO) and the Committee on Civil Liberties, Justice and Home Affairs (LIBE). This follows reports of the EU Committee of the Regions (here) and the Committee on Culture and Education (here), and an opinion of the European Central Bank (here), each proposing amendments to the AI Act. These proposed amendments are of interest to anyone wanting an insight into how the AI Act may change.
This article looks at the following proposed amendments by IMCO and LIBE: the definition of an AI system; risk management systems for High-Risk AI Systems; registration of High-Risk AI Systems in the EU database; additions to the list of High-Risk AI Systems; and enforcement of the AI Act by the Commission.
Proposed additions to the AI Act appear in bold italics, whilst wording proposed to be deleted appears underlined, e.g. [Proposed deletion: ...].
How AI is defined is important. The AI Act recognises that "The notion of AI system should be clearly defined to ensure legal certainty, while providing the flexibility to accommodate future technological developments".
There is no universally accepted definition of AI. However, definitions often refer to an AI system which is designed to achieve specified objectives. Those objectives may be set by a human. For example, the person using the AI system (the 'deployer' in the AI Act) may specify that they want the AI system to generate a prediction based on a dataset - think of an AI system predicting whether a borrower will default on their overdraft based on their financial history. The current draft of the AI Act defines AI by reference to human-defined objectives, as does the US National Artificial Intelligence Initiative Act of 2020.
However, what if an AI system's objectives were set by AI, such as by another AI system? It may be more difficult to envisage how such an AI system would operate and the risks it poses. But should it fall outside the prohibitions and obligations of the AI Act? Other definitions of AI do not refer to who sets the objectives. For example, the OECD's definition of AI refers simply to 'a given set of objectives'.
The IMCO and LIBE Committees propose to remove the reference to 'human-defined' objectives but do not explain why in the draft report.
(1) ‘artificial intelligence system’ (AI system) means software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of [Proposed deletion: human-defined] objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with...'
The AI Act permits High-Risk AI Systems subject to specific requirements. One of those is that a "risk management system shall be established, implemented, documented and maintained".
The AI Act specifies the steps required of the risk management system, including: identification of known and foreseeable risks; evaluation of risks of reasonably foreseeable misuse; and adoption of suitable risk management measures.
However, who does the AI Act seek to protect from the risks posed by a High-Risk AI System? The IMCO and LIBE Committees propose to clarify this which, in turn, gives further clarity over what a compliant risk management system would include.
(a) identification and analysis of the known and foreseeable risks, and the reasonably foreseeable risks, that the high-risk AI system can pose to:
(i) the health or safety of natural persons;
(ii) the legal rights or legal status of natural persons;
(iii) the fundamental rights of natural persons;
(iv) the equal access to services and opportunities of natural persons;
(v) the Union values enshrined in Article 2 TEU [Which is: The Union is founded on the values of respect for human dignity, freedom, democracy, equality, the rule of law and respect for human rights, including the rights of persons belonging to minorities. These values are common to the Member States in a society in which pluralism, non-discrimination, tolerance, justice, solidarity and equality between women and men prevail].
The risk management system for High-Risk AI Systems appears to be in addition to other obligations on credit institutions as developers or users. The AI Act requires quality management systems and monitoring for High-Risk AI Systems, but those obligations are deemed fulfilled by credit institutions which comply with Directive 2013/36/EU. However, no similar deemed fulfilment is in place for the risk management system for High-Risk AI Systems.
The European Central Bank welcomed the AI Act's attempt to avoid overlap with existing legislative frameworks for credit institutions (as we wrote about here when the ECB published its opinion on the AI Act). It is unclear whether a potentially additional risk management system for credit institutions' High-Risk AI Systems is an intended overlap or not.
The AI Act provides for a publicly accessible EU database of High-Risk AI Systems. Those AI Systems must be registered by the provider before being placed on the market or put into service. A provider means a 'natural or legal person, public authority, agency or other body that develops an AI system or that has an AI system developed with a view to placing it on the market or putting it into service under its own name or trademark, whether for payment or free of charge'.
A provider is different to a 'user', which is 'any natural or legal person, public authority, agency or other body using an AI system under its authority, except where the AI system is used in the course of a personal non-professional activity'. So, under the current draft of the AI Act, public bodies using a High-Risk AI System provided by a third party would not have to separately register on the EU database.
The IMCO and LIBE Committees propose a greater degree of transparency - public bodies and EU institutions using High-Risk AI Systems should register that use on the EU database. This reflects an ongoing discussion about what good governance looks like for AI Systems used by public bodies (we wrote here about the UK's proposed algorithmic transparency standard for public bodies, and here about what algorithmic transparency in the public sector looks like).
Before putting into service or using a high-risk AI system in accordance with Article 6(2), users who are public authorities or Union institutions, bodies, offices or agencies or users acting on their behalf shall register in the EU database referred to in Article 60.
The AI Act specifies certain types of AI Systems as High-Risk. The IMCO and LIBE Committees propose that the following be added:
The AI Act envisages that Member States will designate a national supervisory authority to enforce the AI Act. Penalties can be sizeable; use of prohibited AI can result in fines of up to €30m or, for companies, up to 6% of worldwide annual turnover for the preceding financial year, whichever is higher.
However, recognising the potential for infringements taking place across multiple Member States, or the possibility of national authorities not bringing enforcement proceedings, the IMCO and LIBE Committees propose that the Commission should also be able to enforce the AI Act (in summary):
If you would like to discuss the potential impact of the AI Act, please contact Tom Whittaker or Martin Cook.
The co-rapporteurs want to emphasize, together, that the goal of the AI Act is to ensure both the protection of health, safety, fundamental rights, and Union values and, at the same time, the uptake of AI throughout the Union, a more integrated digital single market, and a legislative environment suited for entrepreneurship and innovation. This spirit has guided and will continue to guide their work on this Regulation.