
When the Artificial Intelligence (AI) Act gained approval from the European Parliament in March, it was heralded as a landmark piece of legislation. The first of its kind anywhere in the world, the AI Act promises to bring some regulatory clarity into this fast-emerging field. It establishes a common legal framework for AI across the European Union, including stringent requirements for some applications – and an outright ban on others.
In essence, the AI Act treats AI systems in the same manner as any other industrial products, while bringing in some new AI-specific obligations. With a view to preventing obsolescence, it uses a deliberately broad new definition of AI: any ‘machine-based system that is designed to operate with varying levels of autonomy’ and can adapt after deployment, generating outputs like predictions or decisions. Because these regulations are multi-sectoral, they will apply to everyone: from a carmaker designing vehicle safety functions to a financial institution using a credit scoring model. Given the inroads that AI has made into healthcare, however, life sciences companies stand to be among the businesses most affected.
The rules lay out four tiers of risk, from ‘unacceptable risk’ at the top (like social scoring by governments) to limited or minimal risk at the bottom (AI-enabled video games). In the middle sit medical devices, many of which are deemed ‘high risk’ by definition.
“Like many other regulated products, AI systems will need to have a CE marking if they pose a high risk to safety, high risk to health, or high risk in terms of fundamental rights,” explains Vladimir Murovec, head of life sciences regulatory at Osborne Clarke in Belgium. “That’s also true if your product is already regulated on the European market – for instance, if it’s also a medical device algorithm or an in vitro diagnostic software.”
To put it differently, medical device manufacturers will clearly need to pay attention to the rules. But they certainly won’t be the only ones affected. A notable feature of the AI Act is its extension of accountability to ‘deployers of AI systems within the supply chain’ – in practice meaning that anyone who uses these technologies for business purposes will be subject to additional scrutiny.
“The impact goes through the entire supply chain, from AI being used in the preclinical stages of clinical trials, to predicting market trends,” Murovec stresses. “Healthcare professionals, care centres, dental clinics, and any medtech company that uses AI in a business context, will all be subject to the ‘deployer’s obligation’. So this goes really far in terms of scope, and I think the impact is going to be quite deep.”
Innovation vs regulation
For the past few years, medical devices within the EU have been regulated by the Medical Devices Regulation (MDR) and the In Vitro Diagnostic Regulation (IVDR), which became applicable in May 2021 and May 2022 respectively. To an extent, notes Alexander Olbrechts, director of digital health at MedTech Europe, there is some overlap between the existing rules and the new AI Act.
“Examples of elements addressed in both regulations include risk management and quality management requirements, technical documentation and the need for undergoing conformity procedures,” says Olbrechts. “It is critical that, per the AI Act’s Article 8.2, the duplicative or additional requirements of the AI Act can simply be integrated into existing MDR/IVDR processes and procedures and existing documentation.”
By way of example, AI-based medical device software is already regulated by the MDR/IVDR. Under these regulations, manufacturers need to submit the software to a so-called ‘Notified Body’ – responsible for assessing its safety and performance. The hope is that, under the AI Act, existing MDR/IVDR software codes will be maintained.
“This will be a key instrument to mitigate the risk of additional and unnecessary assessments, and by extension avoid any barrier to innovation in the European medical technology sector,” Olbrechts adds. “If we can arrive at a point whereby the AI Act and MDR/IVDR work seamlessly and complementarily, it will go a long way to generate that trust in AI-enabled medical technologies.”
Indeed, the regulators have made a concerted effort not to create unnecessary burdens for businesses. Declaring that AI “can contribute to solving” a range of societal challenges, they’ve made it clear that they do not want to stifle innovation or delay market entry for emergent technologies. At the same time, hitting the brakes for a time may not always be a bad thing. After all, the main idea behind the AI Act is to ensure that AI systems are safe, ethical and accessible – a situation the industry would likely favour even if it meant more bureaucracy.
“When you’re talking about healthcare, in the same way as when you’re talking about driverless cars, you want to make sure that its safety has been thoroughly interrogated,” points out Will James, the international sector head of life sciences and healthcare at Osborne Clarke.
As well as contending with the ‘deployer’s obligation’ for the first time, life sciences companies will face new obligations around accuracy, cybersecurity, monitoring and transparency, extending throughout the whole supply chain. Providers will need to go through additional pre-market assessments to build up their technical documentation, while new Notified Bodies will have to be accredited too. Existing manufacturers, whose products are already on the market, will equally be obliged to conduct a thorough review to make sure their applications comply. Beyond that, there are various areas of uncertainty on which the industry will be seeking further guidance. For one thing, it isn’t yet clear whether devices deployed in clinical trials will need to be certified under the AI Act beforehand, or whether they will qualify for a so-called ‘research exemption’.
€35m
The top fine payable (or 7% of global annual revenue) for breaching the AI Act.
WilmerHale
“The AI Act’s research exemption tells you, ‘well, actually, you don’t need to comply with the new regulation if your AI system is being specifically developed and put into service solely for research purposes,’” says Murovec. “The impact of this exemption will be different for pharma and medtech. And the medtech and diagnostic sector is being cautious because AI systems that are medical devices will be ‘high risk’.”
The upshot is that, for better or worse, this may indeed create some extra work for manufacturers. After all, despite the regulators’ best intentions, it would be rare to find a new set of regulatory requirements that did not cause some procedural delays. The AI Act is no exception. “The informal feedback we’re getting,” says Murovec, “is a bit of fear that this will indeed slow everything down.”
Embedded vs non-embedded AI
An interesting feature of the new rules is that they apply to ‘non-embedded’ AI just as much as they do ‘embedded’ systems. In simple terms, embedded AI is physically integrated into a product, whereas non-embedded AI isn’t. The healthcare sector contains many examples of non-embedded AI, including AI-powered symptom checkers and AI modules analysing electronic health records.
August 2nd 2026
The date that the AI Act becomes fully applicable, in most cases.
European Union
While conceding that the distinction around embedded AI may seem slightly archaic, Murovec nonetheless thinks it’s an important clarification.
“For example,” he says, “we have clients who have very specific computer hardware, and loaded onto that hardware is software that interrogates images and then uses an algorithm to predict the likelihood of a heart attack. Clearly, that is loaded onto a physical system. But equally, it doesn’t need to be. The cloud-based set-up is going to be just as important, if not more so, in the future.”
$15.7tn
The potential contribution to the global economy from AI by 2030.
PwC
Embedded AI, for its part, is no less important, covering devices as varied as wearable health monitors, medical robots and diagnostics equipment. Either way, these are the types of technologies that are often already covered by the MDR/IVDR, and the ones that could face duplicative or conflicting requirements under the AI Act. As the main European trade organisation representing the industry’s interests, MedTech Europe is therefore focusing its energies here.
“This remains our primary concern,” emphasises Olbrechts, “and we continue to advocate for clear interplay between the AI Act’s requirements for high-risk AI systems and those of MDR/IVDR.”
Staying relevant
While AI is clearly a broad and fast-developing category, the AI Act has been crafted with an ambitious goal in mind: to remain applicable and effective as technology evolves.
“I think we’ll capture a lot of scenarios,” says James. “And, of course, the European Commission is empowered to enact guidelines on certain questions, which can be updated depending on technologies’ advancements. So when we look to the future and what could be placed on the market in ten, 20 years, I think it will still probably be relevant.”
We obviously can’t say for sure what’s coming down the line – especially when it comes to something as dynamic and unpredictable as AI. That said, the new regulation should be able to accommodate a wide range of future scenarios, as algorithms become ever more pervasive across healthcare.
Given how much there is to play for – not least enhanced diagnosis, more equitable health access, and improved connectivity among doctors – Murovec hopes that the AI Act will serve to foster greater trust in these technologies.
“I think it will increase transparency, it will increase accessibility, and it will increase literacy in AI technology,” he says. “I think that’s really important for patients. It’s also essential for healthcare providers, even though it might take some time before we get there.”