Software knows best

31 May 2022



Regulating software is one thing, but when it’s designed to think for itself, it’s quite another. AI isn’t new to medical devices, but as products that use it grow more ambitious, regulators need a framework that prioritises patient safety without stifling progress. Lynette Eyb asks Pat Baird, head of global software standards at Philips, and Sara Gerke, assistant professor of law at Penn State Dickinson Law, where the challenges to regulating AI lie and what such a framework might look like.


Artificial intelligence (AI) and machine learning (ML) could unlock unimaginable potential in the medical devices field. Together, they are at the forefront of a new wave of device development that touches on almost every aspect of medicine, from early-stage clinical development and trials through to pharmacovigilance.

The very concept of “software as a medical device” is just as limitless in its scope: smartphones used to view MRI images, software that aids cancer diagnostics, biosensors and implants, tracking apps, statistical modelling and wearable tech – all could reap the benefits of a deeper integration of AI/ML. But all this potential is matched by the complexity of introducing the technology into the market. Unlike a traditional device that is designed, approved, manufactured and marketed in a rigid way, a device that uses AI/ML has the capacity to evolve much more fluidly. The vast amounts of data collected day-to-day as the device is used in patient care have the potential to trigger modifications. For example, new information and insights may prompt a change to an algorithm, which would technically result in a change to the device itself.

Inevitably, regulating products based on technology that is in a constant state of flux is proving a thorny issue. Regulators and industry players on both sides of the Atlantic are scrambling to develop and agree upon rules that provide a path to market for this new generation of medical devices.

Action plan

In January 2021, the US Food and Drug Administration (FDA) released an action plan covering AI- and ML-based software used as a medical device. Almost two years in the making, it followed stakeholder feedback on an April 2019 discussion paper. Together with Health Canada and the UK Medicines and Healthcare products Regulatory Agency (MHRA), the FDA has also pinpointed ten “guiding principles” to inform the development of good machine learning practice for medical devices. The International Coalition of Medicines Regulatory Authorities, meanwhile, has put forth recommendations to harmonise AI/ML regulations across the devices landscape.

But discussions about regulation cover not only technical and safety issues, but also data protection, liability and trust. Sara Gerke, assistant professor of law at Penn State Dickinson Law, specialises in this area, and she hasn’t been surprised by the time and energy expended on finding a balance between ensuring patient safety and providing developers with the flexibility to evolve their technology.

“Solving the regulatory issues raised by medical AI is complex,” Gerke says. “In particular, it will be a challenge to develop a framework for adaptive AI/ML. If the AI/ML can continuously learn from new data, it will be essential to have an ongoing monitoring system in place to ensure the device remains safe, effective and free from bias.”

Pat Baird, head of global software standards at Philips, agrees that the very nature of the beast complicates regulations, adding that the technology itself “puts new stress” on existing ways of working. “We already have processes in place for change control, but since AI can learn and improve itself quickly, having a streamlined change process will help us improve product performance and help more patients at a faster pace than before,” he says. “This is not a challenge in deploying the technology, but rather a challenge in how to maximise the benefits of the technology.”

Baird says the pace of adoption of AI/ML in devices has been hindered by the pandemic. “Things have been slower in the past few years than I originally anticipated, but this is mainly due to the strain that Covid has placed on the healthcare ecosystem,” he says. “Some of the time that people could have spent working on AI was spent managing Covid.” Even so, he warns against any move to fast-track the technology. “I don’t think that we want to rush the adoption – we need to take a thoughtful approach, identify the gaps, and work together to try to solve those issues,” he says.

Gerke agrees. “Of course, regulation should not stifle innovation, [but] it is a delicate balancing act. It is important these devices are safe and effective, and that regulation adequately protects patients from harm,” she says, explaining it’s a simple fact that the law often lags behind technology. “It will be important to update the regulatory framework now and at regular intervals if needed. Even if the law is behind, it usually catches up and needs to be revised again because of new developments.”

“If the AI/ML can continuously learn from new data, it will be essential to have an ongoing monitoring system in place to ensure the device remains safe, effective and free from bias.”

Sara Gerke

Quality is crucial

On the surface, the fastest way to accommodate AI would be to absorb it into existing regulations that govern devices. “I believe AI fits into the current regulatory framework for medical devices,” says Gerke. “But within this framework, we need specific requirements for AI-based devices, and regulators need to develop standards specifically for AI.”

According to Baird, many existing processes could be adapted and reused to safeguard data – and product – quality. “Everyone knows the saying ‘garbage in, garbage out’, and the quality of the data used to train and test AI/ML systems has a direct impact on product performance,” he says, adding that it should be possible to look at the key success factors in supplier quality and see how the principles behind them could be applied to data quality.
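To make the “garbage in, garbage out” point concrete, the sketch below shows the kind of automated checks a manufacturer might run on a training set before using it – analogous to incoming inspection in supplier quality. The specific checks, the pandas-based implementation and the hypothetical “diagnosis” label are illustrative assumptions, not anything prescribed by Baird or the regulators.

```python
import pandas as pd

def basic_data_quality_report(df: pd.DataFrame, label_column: str) -> dict:
    """Run a few simple 'garbage in, garbage out' checks on a training set.

    The checks below are illustrative, not a regulatory requirement: they
    play the same role as incoming-inspection criteria in supplier quality.
    """
    return {
        # Share of missing values in each column
        "missing_fraction": df.isna().mean().to_dict(),
        # Exact duplicate records, which can quietly inflate measured performance
        "duplicate_rows": int(df.duplicated().sum()),
        # Class balance of the label, to flag unrepresentative samples early
        "label_distribution": df[label_column].value_counts(normalize=True).to_dict(),
    }

# Hypothetical usage with a training set for a diagnostic model:
#   report = basic_data_quality_report(training_df, label_column="diagnosis")
#   ...reject or remediate the data if the report breaches agreed acceptance criteria.
```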

Quality is crucial to building patient and carer confidence – and boosting uptake. “If AI works, most people will likely want to use it,” says Gerke. “It will therefore be important that AI-based devices are responsibly developed with an ‘ethics by design’ approach and that they are safe and effective to use, improve patient outcomes and fit into hospital workflows.”

Baird believes ML systems will need a “proactive” approach when it comes to monitoring product performance over time, and that there could be lessons to be learned from other sectors. “We need to know if performance suddenly changes. In cybersecurity, there is a very proactive approach to monitoring products for new threats; perhaps we can use their general approach to help in the development of ML post-market monitoring.”
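One way to picture such proactive post-market monitoring is a simple drift check on a deployed model’s accuracy against confirmed outcomes. The sketch below is a minimal illustration only: the rolling window, baseline and tolerance are assumed values, not figures from Philips or any regulator.

```python
from collections import deque

class PerformanceMonitor:
    """Rolling check of a deployed model's accuracy against confirmed outcomes.

    Window size, baseline and tolerance are assumed values for illustration;
    a real post-market plan would derive them from the device's risk analysis.
    """

    def __init__(self, baseline_accuracy: float, tolerance: float = 0.05,
                 window: int = 500):
        self.baseline = baseline_accuracy
        self.tolerance = tolerance
        self.results = deque(maxlen=window)  # 1 = correct, 0 = incorrect

    def record(self, prediction, confirmed_outcome) -> None:
        """Log one prediction once the ground-truth outcome is known."""
        self.results.append(1 if prediction == confirmed_outcome else 0)

    def drifted(self) -> bool:
        """True if rolling accuracy has fallen below the acceptable band."""
        if len(self.results) < self.results.maxlen:
            return False  # not enough confirmed outcomes yet
        accuracy = sum(self.results) / len(self.results)
        return accuracy < self.baseline - self.tolerance


# Example: a baseline accuracy of 0.92 established during validation.
monitor = PerformanceMonitor(baseline_accuracy=0.92)
# In deployment: call monitor.record(prediction, confirmed_outcome) for each case,
# and escalate to post-market surveillance if monitor.drifted() returns True.
```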

Another element of introducing AI/ML-based devices that depends on quality and reliability is patient buy-in. Patients have historically been amenable to adopting everything from artificial implants to pumps that manage insulin flow – provided the technology is proven and safe. “The amount of buy-in needed [from patients] will vary by the risk and benefit of the application,” says Baird. “For low-risk AI, the amount of trust needed is much lower than for higher-risk applications. For some of the low-risk items, I think we are already there – patients might not even realise there is AI in [certain] applications.” He also notes the crucial role of the healthcare professional in securing patient buy-in. “Many patients will trust their physician – if the physician believes in the product, then the patient will too. Some patients will, of course, have their own opinions, and building confidence with them will also be important to the long-range success of this technology.”

One of the key challenges to securing patient buy-in will be public messaging and education. There is a danger that people will read about AI failures and generalise them to all applications. “For example, a crash involving an autonomous vehicle will affect the trust in AI across all applications,” says Baird. “People won’t care that the problem was a software issue for a certain model car running on a certain revision of the software; people will think ‘we can’t trust any of this technology in any application’.”

Legal liability

Beyond regulation, Baird says work needs to be done on the legal side. “People have questions about legal liability – if the user decides to take action due to what the AI is indicating, and if it ends up being wrong, who is liable? Is it the application’s fault? Or the user for not knowing better?” Gerke, who served as a Research Fellow on Harvard Law School’s Project on Precision Medicine, Artificial Intelligence and the Law, also picks up on this point. “To promote the implementation of AI in healthcare, it will be crucial to properly balance liability risks among stakeholders,” she says. “Insurance can play an important role in this regard.”

For Baird, the potential of the technology far outweighs any challenges – be they regulatory or otherwise. He cites the response from a nurse after he asked what had changed in the sector over the last decade. “Her reply was that she was spending too much time on administrative tasks at her workstation rather than actually taking care of patients,” he says. “She had become a caregiver to take care of people, yet she was spending all her time taking care of machines.”

This, he explains, is where AI-driven devices will come into their own. “The ML system can take care of administrative tasks [and] can help improve efficiencies – the machine can be good at doing what machines do, freeing up time for caregivers to give care.”


Ten guiding principles from UK, US and Canadian regulators:

1. The regulators recommend using multidisciplinary expertise throughout the total product life cycle of an AI/ML device. This entails understanding of how an AI/ML model should be integrated into clinical workflow, as well as of benefits and risks to users and patients over the full course of the device’s life cycle.

2. Implementation of good software engineering and security practices should be ensured, covering risk and data management as well as design processes robust enough to support integrity and authenticity of data.

3. Manufacturers and sponsors should ensure that clinical study participants as well as data sets effectively represent intended patient populations. Data collection protocols should be designed and calibrated to capture relevant patient population characteristics.

4. AI/ML medical device manufacturers and clinical study sponsors must keep training data sets independent of test data sets, addressing sources of dependence such as shared patients or shared collection sites (a minimal sketch of a site-level split appears after this list).

5. Developers should base selected reference datasets on best available methods. This effort helps ensure collection of clinically relevant data, and that any reference data limitations are understood.

6. Model design should reflect the AI/ML device’s intended use, and reflect available pertinent data. Performance goals for testing should be based on well-understood clinical benefits and risks.

7. The performance of the human-AI team – the model working together with its human users – warrants appropriate focus. Human factors and usability are crucial considerations alongside standalone model performance.

8. Manufacturers should ensure that model testing accurately demonstrates device performance under clinically relevant conditions, including intended patient populations, human-AI team interactions, measurement inputs and potential confounding factors.

9. Clear and essential information should be provided to users. AI/ML device developers and manufacturers should provide relevant, clear and accessible information to intended users such as patients or healthcare providers. This includes instructions for use, the performance of the AI/ML model for appropriate subgroups, and the characteristics of the data used to train and test the model. Processes to provide device updates based on real-world performance monitoring should also be in place.

10. Performance monitoring of deployed models should be carried out in order to uphold or improve safety and performance, accompanied by periodic or continuous model training for more effective risk management.

Source: www.emergobyul.com
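Principle 4 above calls for training data to be kept independent of test data. As a minimal illustration of one way to do that, the sketch below splits a toy dataset at the level of hypothetical clinical sites using scikit-learn’s GroupShuffleSplit, so that no site contributes records to both sides; the data, site identifiers and split proportions are invented for the example.

```python
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

# Toy data: 1,000 records collected at 10 hypothetical clinical sites.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 12))             # feature vectors
y = rng.integers(0, 2, size=1000)           # binary labels
site_ids = rng.integers(0, 10, size=1000)   # site each record came from

# Split so that no site contributes records to both training and test data,
# keeping the held-out evaluation independent at the site level.
splitter = GroupShuffleSplit(n_splits=1, test_size=0.3, random_state=42)
train_idx, test_idx = next(splitter.split(X, y, groups=site_ids))

train_sites = {int(s) for s in site_ids[train_idx]}
test_sites = {int(s) for s in site_ids[test_idx]}
assert train_sites.isdisjoint(test_sites)

print("Training sites:", sorted(train_sites))
print("Held-out test sites:", sorted(test_sites))
```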

AI-based devices should follow an ‘ethics by design’ approach to assure people that they are safe to use.

