Barriers to AI in healthcare systems

29 January 2019

Given the effectiveness of AI-enabled software in other domains, AI has been introduced into many areas of medicine. One such application is clinical decision support (CDS) software, which falls into the category of software as a medical device (SaMD). In light of the prevalence of diagnostic errors and their potential impact on patients, there is an increasing drive for technologies that can support these decisions. Diagnostic errors are estimated to account for almost 60% of all medical errors and to result in 40,000 to 80,000 deaths each year.

AI offers huge potential to reduce this error rate by being able to quickly scan patient data and deliver diagnoses efficiently and more accurately than humans. However, there are a number of barriers preventing this vision from becoming a reality.

The Duke-Margolis Center for Health Policy published a white paper identifying some of the challenges facing the integration of AI into such contexts. The first of these is the need for more evidence demonstrating the effectiveness of these technologies. This includes the impact of the software on patient outcomes, care quality, costs of care and workflow, as well as the usability of the software, its ability to provide useful and trustworthy information, and the potential for reimbursement of these products by payers.

The second barrier identified is the need for more data about the risks such technologies pose to patients. The degree to which a software product comes with information explaining how it works, and the types of populations used to train it, will have a significant impact on regulators' and clinicians' assessment of the risk to patients. The trade-offs between continuously learning models and locked models also need to be weighed.

The final challenge identified in the report pertains to the ethical implications of using AI. It is essential that these technologies are ethically trained and flexible. Efforts to ensure that AI does not perpetuate or exacerbate biases already held by health professionals are imperative. Furthermore, developers must assess whether data inputs can be applied to settings different from the original in order to determine the scalability of these technologies. The report also suggests that best practices, and potentially new paradigms, are needed to protect patient privacy. In light of the lack of clarity around the regulation of SaMD, it is essential to tread carefully with AI technologies to ensure that they can fulfil their potential and improve patient outcomes.
