25 July 2022

Artificial intelligence (AI) systems for medical and healthcare purposes received the most investment between 2017 and 2021, according to the Stanford AI Index Report 2022.[1]

This is understandable – technology is improving whilst becoming cheaper, and AI promises improved results for patients and competitive advantages to companies.

However, AI systems also pose risks to the health, safety and fundamental rights of people, and legal and commercial risk to companies.

In order to encourage innovation and address AI risks, there is a growing global body of proposed and enacted regulation specific to AI. One such proposal is the EU's cross-sector regulation of AI, known as the AI Act (“the Act”).

The Act is currently in draft and going through the EU’s legislative process. It will be amended before it comes into force, which may not be for a number of years. But we believe that it is a case of when, not if, the Act is passed.

What is clear now is that the Act is intended to directly affect HealthTech companies whose AI systems are placed on the EU market and are subject to third-party conformity assessments under the Medical Devices Regulation (“MDR”) and the In Vitro Diagnostic Medical Devices Regulation (“IVDR”).

Think AI-enabled diagnostic tools, therapeutic devices or implantable devices like pacemakers.

However, the Act will also affect HealthTech companies whose AI systems are not caught by the Act directly. For example, think of AI-enabled GP apps, patient chatbots and fall detection systems.[2]

Their AI can go wrong and cause harm to patients by delaying treatment or suggesting inappropriate treatment. Whilst such systems may not be classed as high risk under the Act, and so may not be subject to the Act’s obligations, the Act will shape industry and customer expectations of how the risks of such AI systems are managed.

Here we provide a brief overview of the Act and its relevance to HealthTech.

Who does the Act affect?

The Act will apply to (with a few exceptions):

1. providers placing on the market or putting into service AI systems in the EU, irrespective of whether those providers are established within the EU or in a third country;

2. users of AI systems located within the EU;

3. providers and users of AI systems that are located in a third country, where the output produced by the system is used in the EU.

It is important to note that companies inside and outside of the EU producing and providing HealthTech AI into the EU market will be subject to the Act.

Which HealthTech will be subject to the Act? How?

Two questions to ask are: is your system ‘Artificial Intelligence’ and, if so, does it fall within one of the risk categories that would make it subject to specific obligations and restrictions?

A broad definition of an AI system

There is no globally accepted definition of AI, and technological progress makes definition difficult. The Act seeks to ‘future proof’ its application by defining an AI system broadly and by reference to a list of AI techniques and approaches that can be updated over time.

The current definition is any ‘software that is developed with one or more of the techniques and approaches listed in Annex I [including logic, knowledge-based and statistical approaches] and can, for a given set of human-defined objectives, generate outputs, such as content, predictions, recommendations, or decisions influencing the environments they interact with’. Exactly what that means in practice will not always be clear.

The final definition of AI is likely to change. Different EU Parliamentary committees have proposed amendments to the definition depending on what they believe does (or does not) warrant intervention by the Act.

For example, the Committee on Legal Affairs and the Committee on Industry, Research and Energy seek to replace the word “software” with the broader “machine-based system”.

The trade-off of a ‘future proof’ definition is that determining exactly what is, or is not, an AI system, and therefore what is subject to the Act, will involve some uncertainty.

What is the risk of the AI system?

So if a HealthTech system is classed as AI, the next question is which risk category it falls into.

The Act uses a three-tiered risk framework to classify AI systems into: 1) unacceptable risk (prohibited AI systems); 2) high risk; and 3) low or minimal risk. Depending on the specific classification, different obligations are imposed.

High Risk HealthTech will include AI systems:

  • which are a product, or a safety component of a product, which: 1) has to undergo third-party conformity assessment before being placed on the market;[3] or 2) is subject to specific EU legislation, including the MDR and IVDR;[4] or
  • intended to be used to dispatch, or to establish priority in the dispatching of emergency first response services, including by firefighters and medical aid.

Not all HealthTech will be High Risk under the Act, but there remains a risk that certain types of HealthTech AI systems will be added to the list of High Risk AI systems subject to the Act.

For example, the European Parliament committee on the internal market has proposed adding “systems used for emergency healthcare patient triage” and “systems used in making decisions on the eligibility for health and life insurance” to the High Risk category.

What are the obligations on High Risk AI systems under the Act?

HealthTech companies should be aware that the Act will place potentially significant obligations on High Risk AI systems.

In any event, even those companies whose AI systems are not categorised as High Risk may want to incorporate the Act’s obligations into their AI systems because that is what their industry and customers will expect as correct practice.

Systems classified as High Risk will have specific obligations[5] in relation to:

  • reporting requirements to consumers;
  • transparency to users;
  • data protection and governance;
  • technical documentation;
  • record keeping;
  • risk management;
  • human oversight; and
  • robustness, accuracy and security.

The Act will sit alongside other industry-specific legislation and compliance with all relevant requirements will be expected. This is particularly the case for High-Risk HealthTech AI systems which also need to comply with the MDR and IVDR.

The safety risks specific to AI systems are meant to be covered by the Act whilst the overall safety of the product, and how the AI system is integrated, is addressed by the conformity assessment under the MDR or IVDR.

The Act is intended to “be integrated into the existing sectoral safety legislation [including the MDR and IVDR] to ensure consistency, avoid duplications and minimise additional burdens”.

Whether consistency is achieved remains to be seen.

We anticipate a degree of further sector-specific discussion as the Act progresses to ensure that different legislative regimes work together harmoniously.

Penalties for breaching the Act

The Act will be enforced by a designated regulator within each Member State and at EU level by a newly established European Artificial Intelligence Board.

The Act proposes substantial fines for non-compliance along with the ultimate right to recall the AI system:

  • Breach of the prohibition on unacceptable risk AI systems, or infringement of the data governance provisions in relation to High Risk AI systems: up to the higher of EUR 30 million or, if the infringer is a company, 6% of total worldwide annual turnover.
  • Non-compliance of an AI system with any other requirement under the Act: up to the higher of EUR 20 million or 4% of total worldwide annual turnover.
  • Supplying incorrect, incomplete or misleading information to notified bodies and national authorities: up to the higher of EUR 10 million or 2% of total worldwide annual turnover.
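
By way of a worked example, the short Python sketch below illustrates how the “higher of” caps above scale with a company’s total worldwide annual turnover. The thresholds and percentages reflect the figures summarised above; the company and turnover figure used are purely hypothetical.

```python
# Illustrative sketch only: how the draft Act's fine caps scale with turnover.
# The euro thresholds and percentages reflect the tiers summarised above;
# the turnover figure in the example is purely hypothetical.

def maximum_fine(worldwide_turnover_eur: float, fixed_cap_eur: float, turnover_pct: float) -> float:
    """Return the cap on a fine: the higher of the fixed euro amount or the
    stated percentage of total worldwide annual turnover."""
    return max(fixed_cap_eur, worldwide_turnover_eur * turnover_pct)

# Example: a hypothetical company with EUR 2 billion worldwide annual turnover
# breaching the prohibition on unacceptable risk AI systems (EUR 30m / 6% tier).
turnover = 2_000_000_000
cap = maximum_fine(turnover, fixed_cap_eur=30_000_000, turnover_pct=0.06)
print(f"Maximum potential fine: EUR {cap:,.0f}")  # prints EUR 120,000,000
```

As the example shows, for larger companies the turnover-based percentage, rather than the fixed euro amount, is likely to set the ceiling.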

Conclusion

The Act will come into force, albeit perhaps not for many years. Once in force, there will be a two-year transition period in which HealthTech companies can prepare.

However, HealthTech companies should pay attention now. The EU has set a clear direction of travel that will affect what HealthTech companies can and cannot do, regardless of whether they operate inside or outside the EU.

The Act is likely to set, or at least influence, global standards for regulating AI, just as the GDPR did for data protection.

Early engagement with the Act’s objectives and content will give HealthTech companies guidance on the future shape of their AI systems and their industry.

This article was written by Tom Whittaker and Annalise Slocock. 

If you have any questions relating to HealthTech and AI, or the wider healthcare sector, please contact Tom Whittaker or Head of Healthcare, Patrick Parkin.

 

[1] https://aiindex.stanford.edu/wp-content/uploads/2022/03/2022-AI-Index-Report_Master.pdf figure 4.2.11

[2] Assuming they are not subject to the MDR or IVDR.

[3] However, designation of an AI system as High Risk under the AI Act does not mean that it will be similarly classified under the medical devices or in vitro diagnostic medical devices regulations.

[4] Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices, amending Directive 2001/83/EC, Regulation (EC) No 178/2002 and Regulation (EC) No 1223/2009 and repealing Council Directives 90/385/EEC and 93/42/EEC (OJ L 117, 5.5.2017, p. 1);

Regulation (EU) 2017/746 of the European Parliament and of the Council of 5 April 2017 on in vitro diagnostic medical devices and repealing Directive 98/79/EC and Commission Decision 2010/227/EU (OJ L 117, 5.5.2017, p. 176).

[5] But the relevant authority may allow an exemption.
