The EU’s Artificial Intelligence (AI) Act is many important things – the world’s first attempt at a legal framework for AI, for one – but it is also an excellent subject for a medical device quiz. Try this one: when is a medium-risk medical device not a medium-risk medical device? The answer: when it uses AI. The act came into force in August 2024 but is being rolled out in phases. As a result, it looms ever larger on the sector’s radar, not least because it brings with it those quiz-worthy changes to the status of certain Class IIa and IIb medical devices.

Such devices have traditionally been considered medium-risk and regulated accordingly. Under the new legislation, however, they are automatically reclassified as high-risk if they contain any element of AI.

And the act’s definition of AI is wide-ranging. An “AI system” means, it says, “a machine-based system that is designed to operate with varying degrees of autonomy and that may exhibit adaptiveness after deployment and that… infers… how to generate outputs”.

That “varying degrees of autonomy” wording could place relatively straightforward applications of automation under the act. In addition, the legislation covers both devices in which AI is used as a safety component and those in which the AI is itself the product. The upshot is that many medical devices may fall within the scope of the act, requiring their manufacturers to comply with a new regulatory framework. An infusion pump that alters dose based on pre-determined boundaries related to vital signs could be seen as falling under the act’s definition, just as much as software that ‘learns’ how to identify potential abnormalities in diagnostic scans.
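To see how low that bar sits, consider a minimal, hypothetical sketch of the rule-based pump logic described above. The function name, thresholds and boundaries are illustrative assumptions, not taken from any real device:

```python
# Hypothetical sketch only -- not real device logic.
# Even this simple rule-based controller operates with a degree of
# autonomy and "infers" an output from its inputs, which is arguably
# enough to bring it within the act's definition of an AI system.

def adjust_infusion_rate(current_rate_ml_h: float, heart_rate_bpm: int) -> float:
    """Adjust dose within pre-determined safety boundaries based on a vital sign."""
    MIN_RATE, MAX_RATE = 0.5, 10.0  # illustrative boundaries (ml/h)

    if heart_rate_bpm > 120 or heart_rate_bpm < 50:
        new_rate = current_rate_ml_h * 0.9  # vital sign out of range: back off the dose
    else:
        new_rate = current_rate_ml_h

    return max(MIN_RATE, min(MAX_RATE, new_rate))

print(adjust_infusion_rate(current_rate_ml_h=5.0, heart_rate_bpm=130))  # -> 4.5
```

No learning happens here, yet the “varying degrees of autonomy” wording means logic of roughly this shape could still be in scope.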

AI raises the risk profile of medical devices

According to Alison Dennis, international co-head of life sciences and healthcare at law firm Taylor Wessing, the AI regulatory framework created by the act is a demanding one. Indeed, she believes many companies will find compliance with it to be “hard work”. However, she has good news for anyone working in an industry for which the act provides a handy quiz question.

“I think medical devices companies are not going to find it anywhere near as difficult as anyone else, because the fundamental base on which the AI act is written is so similar to the medical device regulations,” she says.

“It’s the same thinking; the same process of a notified body, of a quality management system and of technical documentation.”

So, while medical device companies with devices that fall under the act will now have two separate regulatory processes to go through – one for the device and one for the AI – there will in effect be overlap. “The guidance suggests that you have a quality management system for the medical device and then for the AI, but the process for the two is virtually the same.”

While the practicalities may remain similar, cybersecurity expert Beau Woods says there is a logic behind emphasising the potential additional risks of AI. “Any time you add new capabilities, you open up exposure to accidents and adversaries beyond those that existed before you added that thing.

“The more components, the more moving parts there are to something, the more room there is for failure modes. That’s just intrinsic. It’s not a feature of AI in itself, any more than it is of software. It is just an inherent property of complex systems.”

Woods is a cybersecurity advocate with I Am The Cavalry, a grassroots organisation focused on the intersection of digital security, public safety and human life. Its area of concern, says founder Joshua Corman, “is where bits and bytes meet flesh and blood”. Medical devices thus sit right at the heart of its work.

In 2015, the organisation helped successfully push for the world’s first recall of a medical device purely for cybersecurity reasons, having identified that an infusion pump could be hacked and doses adjusted. With AI, risks could be exacerbated. “I’ve seen a lot of AI-type applications in diagnostics, so machine learning algorithms being used for radiology and imaging data,” says Woods.

“But as with any software, there are ways to attack them to make them less reliable – going in and tampering with things like the weighting values, perhaps.

“There is a constant back and forth: how do we create something that works in an idealised environment, and then how do we ensure that it does not fail in a hostile environment.”
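As a toy illustration of the tampering Woods describes, the sketch below builds a synthetic linear ‘classifier’, flips the sign of a few of its weights, and shows the resulting drop in accuracy. The data and model are invented stand-ins for any real diagnostic algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a diagnostic classifier: a linear model on synthetic data.
X = rng.normal(size=(500, 8))
true_w = rng.normal(size=8)
y = (X @ true_w > 0).astype(int)

w = true_w.copy()  # pretend these are the deployed model's learned weights

def accuracy(weights: np.ndarray) -> float:
    """Fraction of synthetic cases the linear model labels correctly."""
    return float(((X @ weights > 0).astype(int) == y).mean())

print("intact model accuracy:  ", accuracy(w))

# An attacker with write access flips the sign of a few weights --
# the kind of quiet tampering that silently degrades reliability.
tampered = w.copy()
tampered[:3] *= -1
print("tampered model accuracy:", accuracy(tampered))
```

The point is not the attack itself but the asymmetry: a change invisible in the model file can be very visible in patient-facing output.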

For Corman, the key concern is that “dependence on connected technologies in these areas is growing a lot faster than our ability to secure those technologies”.

Regulatory and cybersecurity challenges

With AI, the further challenge is that there is built-in evolution of the system over time. As Taylor Wessing’s Dennis puts it: “Change is built in with ‘proper’ AI because it is thinking and learning from the data that you’re putting into it.”

So, at what point should a change to an AI-supported product require re-regulation? This is a question being considered by bodies worldwide. The US Food and Drug Administration (FDA), Health Canada, and the UK Medicines and Healthcare products Regulatory Agency have attempted to answer it through guidance on the use of predetermined change control plans for machine learning-enabled medical devices. (Machine learning is considered a sub-type of AI.)

The idea is that manufacturers create such plans at the outset of the regulatory pathway, detailing anticipated changes to the product over time. The initial regulatory approval then covers the changes outlined in this plan, allowing for rapid changes and improvements to systems without requiring a recertification process each time. This serves to front-load some of the work for medical device manufacturers seeking approval of AI products. But for Dennis, the evolving regulatory picture on AI does present one notable area in which such companies will need to step up – namely, post-market activities.
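A predetermined change control plan is, at heart, structured documentation. One way to picture it is as a machine-readable record, as in the hypothetical sketch below; the class and field names are illustrative assumptions rather than any regulator’s schema:

```python
from dataclasses import dataclass, field

@dataclass
class AnticipatedChange:
    """One pre-authorised modification described in the plan (hypothetical)."""
    description: str        # what will change (e.g. retraining cadence)
    method: str             # how the change will be implemented and validated
    impact_assessment: str  # expected effect on safety and performance

@dataclass
class ChangeControlPlan:
    """Hypothetical sketch of a predetermined change control plan."""
    device_name: str
    changes: list[AnticipatedChange] = field(default_factory=list)

plan = ChangeControlPlan(
    device_name="ExampleScan AI",  # invented name
    changes=[
        AnticipatedChange(
            description="Quarterly retraining on newly collected scans",
            method="Retrain with a locked pipeline; validate on a held-out test set",
            impact_assessment="Sensitivity must not fall below the approved baseline",
        )
    ],
)
```

Capturing anticipated changes in a structure like this is what allows the initial approval to cover later updates without a fresh certification round each time.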

“With medical devices, you have always got to be consistently on alert, and that is your quality management system. But the thing is that most companies are not very good at this,” she says. “They go through the process of getting certificates for their devices, and they get the certificate and get the champagne out, and they sit back. But actually, it really doesn’t end there.”

Nowhere is that truer than with AI, given products will be constantly evolving by their very nature. “Medical device companies are going to have to really upskill in post-market surveillance and monitoring, to be able to show they’re confident their devices are working as they should,” suggests Dennis.

It is not the only likely skill gap. Data privacy may also be an area in which manufacturers need to build new expertise. That’s because recital 69 of the EU act states that the right to privacy and to protection of data “must be guaranteed throughout the entire life cycle of the AI system”.

“But companies normally have engineers who have become regulatory professionals. What they don’t have are data privacy professionals,” says Dennis. “So they are going to have to hire those into their regulatory teams.”

There are also likely to be complicated conversations about data sharing in the context of AI-supported medical devices. For manufacturers (referred to as “providers” in the EU act) to be able to monitor their devices, they need data. Yet that data is collected by the hospitals or clinics using said devices (referred to as “deployers” in the act).

“Basically, unless you have obligations on your deployer to provide you with that data, and to do the surveys to let you have access to your patients, you are not going to be able to get the data that you need for your post-market surveillance,” says Dennis.

Meanwhile, the deployer also has obligations to ensure the safe use of devices, and it cannot meet those without support from manufacturers. Yet the support needed in the context of AI may not be something the industry currently has the expertise to provide.

“Reps from manufacturers are going to have to be trained in looking for signs that indicate problems with AI systems, trained in what they have to check, and are going to need to do more than just watch it being used,” suggests Dennis. “They’re going to have to be able to grab the data to be able to do trend reporting for their devices.”
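Trend reporting of the kind Dennis describes can start very simply, for example by flagging when a rolling average of incident reports drifts above its historical baseline. The sketch below is a minimal, hypothetical example; the counts, window and threshold are all assumptions:

```python
from statistics import mean

# Hypothetical monthly counts of reported anomalies for one device model.
monthly_incidents = [2, 3, 2, 4, 3, 2, 5, 6, 8, 9]

def flag_trend(counts: list[int], window: int = 3, factor: float = 1.5) -> bool:
    """Flag when the recent rolling average exceeds the historical baseline."""
    if len(counts) < 2 * window:
        return False  # not enough history to compare against
    baseline = mean(counts[:-window])  # everything before the recent window
    recent = mean(counts[-window:])   # the most recent months
    return recent > factor * baseline

if flag_trend(monthly_incidents):
    print("Trend signal: investigate and consider a vigilance report.")
```

Real post-market surveillance uses more rigorous statistical methods, but the basic shape – baseline, recent window, threshold – is the same.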

Data privacy, transparency and post-market surveillance

In addition, the EU act introduces a separate but linked requirement to be transparent about how AI devices work. “I think writing the documentation that goes with an AI system is going to become an art in itself: explaining it, how it works, what it does, what it doesn’t do, and things it might do but you’re not sure about,” says Dennis. “I think that’s going to be a slightly uncomfortable process.”

With the measures of the EU act due to be fully implemented from August 2026, this and related processes are ones with which manufacturers need to start grappling. For Corman, doing so is increasingly part of such companies making their best possible contribution to society.

“Medical device manufacturers help make the world a better place and help save lives,” Corman says. “But there is a cost to the connectivity of devices, and there is a cost to complexity. I think manufacturers need to make sure they know what that cost is, and be sure that they include the agency and values of the patients they claim to serve in the course of their design. We need humane technology.”


EU Artificial Intelligence Act: four-point summary

1. The AI Act classifies AI according to its risk:

  • Unacceptable risk is prohibited (e.g. social scoring systems and manipulative AI).
  • Most of the text addresses high-risk AI systems, which are regulated.
  • A smaller section handles limited-risk AI systems, which are subject to lighter transparency obligations: developers and deployers must ensure that end users are aware they are interacting with AI (e.g. chatbots and deepfakes).
  • Minimal risk is unregulated (including the majority of AI applications currently available on the EU single market, such as AI-enabled video games and spam filters – at least as of 2021; this is changing with generative AI).

2. The majority of obligations fall on providers (developers) of high-risk AI systems:

  • Those that intend to place on the market or put into service high-risk AI systems in the EU, regardless of whether they are based in the EU or a third country.
  • And also third-country providers where the high-risk AI system’s output is used in the EU.

3. Users are natural or legal persons that deploy an AI system in a professional capacity, not affected end users.

  • Users (deployers) of high-risk AI systems have some obligations, though less than providers (developers).
  • This applies to users located in the EU, and third-country users where the AI system’s output is used in the EU.

4. General purpose AI (GPAI):

  • All GPAI model providers must provide technical documentation and instructions for use, comply with the Copyright Directive, and publish a summary of the content used for training.
  • Free and open licence GPAI model providers only need to comply with copyright and publish the training data summary, unless they present a systemic risk.
  • All providers of GPAI models that present a systemic risk – open or closed – must also conduct model evaluations, adversarial testing, track and report serious incidents and ensure cybersecurity protections.

Source: abstracted from the EU Artificial Intelligence Act high-level summary – artificialintelligenceact.eu/high-level-summary