Vision and execution

29 May 2023



Automation has brought efficiency and cost benefits across a range of manufacturing industries – but what happens when a mistake can pose a risk to the end user? This is the question medical device manufacturers must answer, something they do with high-tech inspection protocols, ensuring each unit is produced to an exact specification. Andrea Valentino talks to Ken McClannon at Jabil Healthcare, and Professor Anil Bharath at Imperial College London, to understand how these so-called vision systems work and how they might be boosted by machine learning technology.


There are few more quintessentially modern words than automation. Increasingly popular everywhere from computing to cars, the practice is spreading fast: a recent Deloitte study found that over 50% of organisations are planning on incorporating automation into their business this year. Examine the numbers and it’s clear that medical device manufacturers are at the centre of this storm. In 2017, research by Emergo found that 53% of medical device manufacturers planned on increasing R&D spending over the coming years, with a more recent study noting that the average producer spent $637,857 on assembly technology in 2021.

Know anything about medical manufacturing – the high distribution costs, the need for speed and efficiency – and this rush towards robotics makes sense. But though factories the world over rely on machines to make their devices, they equally exploit technology to check them for problems. This isn’t particularly surprising. With FDA fines for faulty equipment sometimes rising to eye-watering figures – a few years ago, one Minnesota company was fined $27m for supplying hospitals with defective heart devices – companies are under immense pressure to ensure their products do what they promise. And in a world where factories often stretch across dozens of production lines, it’d be too expensive and time-consuming to check thousands of devices by hand.

Enter ‘vision systems’ – the broad name for a range of automation technologies, together ensuring everything from syringes to artificial hips reach their patients in prime working order. They’re now so central to the inner workings of device development, in fact, that ‘automation’ is often just as likely to refer to vision systems as to the manufacturing process itself. Nor do the technologies underpinning these platforms show any sign of slowing down. Backed by AI and machine learning, inspection protocols could soon become even more sophisticated – even if manufacturers probably shouldn’t sack their human staff just yet.

Grand visions

It’s tempting to imagine that vision systems developed together with general automation – but seen in a certain light, they’re much older. As long ago as the 1960s, the algorithms that modern vision systems rely on were first being honed, while image coding began to be investigated in the 1970s. Whatever their origins, it’s clear that vision systems are now familiar sights on factory floors across medical manufacturing. “We have one medical line, for example, where we have 48 vision systems on it,” says Ken McClannon, a technical business unit director for Jabil Healthcare Pharmaceutical Delivery Systems. “Now there might be 250 stations on the entire line – so 20% of the stations on the line are vision systems.”

With those kinds of numbers, it’s no wonder McClannon says his work really encompasses “automated assembly and testing” as opposed to mere assembly – a fact that other experts are happy to explain. As Anil Bharath, professor of Biologically-Inspired Computation and Inference in the Department of Bioengineering at Imperial College London, puts it, the “fast through-puts” vital to modern manufacturing could be impossible with cumbersome manual checks, while plumping for the occasional spot check might put an operation’s quality control (QC) at risk.

But if device safety explains the broad importance of vision systems – underscored by the fact that outstanding QC can increase company profits by up to 15% – their real ubiquity can be understood from the variety and complexity of modern medical devices. In Mexico, to take one example, industry giant Medtronic has a factory boasting 4,000 workers making everything from catheters to stent grafts.

At the same time, the increasing sophistication of modern medical devices is forcing firms to invest in robust vision systems. At their most technical, devices like eye implants might have hundreds or thousands of tiny pieces, each needing to be flawless in shape and length. The size of devices also makes vision systems a necessity. With defects sometimes the diameter of a single human hair, it’s unsurprising that manufacturers rely on machinery to catch issues humans could never see. In fact – and as the name implies – perhaps the easiest way to understand vision systems is as all-seeing eyes, programmed to peer over conveyor belts and warn workers of problems. To make that happen, platforms are equipped with special lighting, often set in contrast to the surrounding environment, ensuring that the machines can spot scratches or dents. That’s typically matched by investment in ever more powerful cameras – with McClannon claiming that resolutions are getting “higher and higher” all the time.

“You can use specific wavelengths, in terms of the colour of light you use and so on, and use multiple different lighting strategies to inspect different features of a device”
Ken McClannon

Rage against the machines

As McClannon’s last comment suggests, vision systems are constantly being sharpened. And if higher resolutions are one area of work – the latest systems can sometimes offer resolutions of up to 21 megapixels – the Irishman is equally enthusiastic about other improvements. That’s true, for instance, in terms of lighting. “You can get very specific about how you design your lighting for the application,” McClannon explains. “You can use specific wavelengths, in terms of the colour of light you use and so on, and use multiple different lighting strategies to inspect different features of a device.” In a similar vein, McClannon describes the greater memory of modern vision systems, which has made platforms cheaper and faster than when he started in automated manufacturing some quarter of a century ago.

More than that, however, perhaps the most revolutionary development in vision systems is increasing digitalisation, with Bharath arguing that machine learning and AI are just two of the technologies that could transform QC over the coming years. From a theoretical perspective, this isn’t hard to appreciate. If, after all, traditional machine vision platforms are programmed to flag specific and measurable errors in any given device – a needle is too long or a pipe too short – machine learning has the ability to make the whole system far more dynamic.
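To make the contrast concrete, a deterministic check of the ‘needle too long’ kind might look something like the sketch below. This is an illustrative example only – not code from Jabil or any other manufacturer – and the calibration constant, tolerance band and back-lit imaging setup are all assumptions made for the sake of the illustration.

# Illustrative sketch of a deterministic, rule-based dimensional check.
# Assumes OpenCV (cv2), a calibrated top-down camera and a dark part on a
# bright, back-lit background; all numbers here are invented for illustration.
import cv2

PIXELS_PER_MM = 40.0            # assumed calibration from a reference target
NEEDLE_SPEC_MM = (31.5, 32.5)   # hypothetical tolerance band for needle length

def needle_length_ok(image_path: str) -> bool:
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if img is None:
        return False  # unreadable image: fail safe and reject the unit
    # Separate the dark part from the bright background
    _, mask = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return False  # nothing detected: reject
    part = max(contours, key=cv2.contourArea)
    # Longest side of the minimum-area bounding box, converted to millimetres
    _, (w, h), _ = cv2.minAreaRect(part)
    length_mm = max(w, h) / PIXELS_PER_MM
    low, high = NEEDLE_SPEC_MM
    return low <= length_mm <= high

Everything in that check is fixed in advance: the rule either passes or fails, and it will never catch a defect nobody thought to write a rule for.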

Machine learning platforms, by contrast, trained across thousands of individual samples, can gradually learn what their human masters are looking for, eventually noticing problems those masters wouldn’t even have thought of. This fusion of traditional vision systems with AI is called ‘computer vision’, and there’s plenty of evidence that tech companies are rushing ahead to develop it. At IBM, for example, experts have recently unveiled their Maximo Visual Inspection setup. Among other things, it allows users to constantly improve the power of their vision systems, even training them to understand exactly why something goes wrong.
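In spirit, the learned approach looks quite different. The toy sketch below – which stands in for, and is emphatically not, a production system such as Maximo Visual Inspection – trains a simple classifier on labelled example images; the folder layout, image size and model choice are all assumptions made purely for illustration.

# Illustrative sketch of a learned inspection check: instead of hand-written
# rules, a classifier is trained on many labelled example images.
# The 'inspection_samples/good' and 'inspection_samples/defective' folders
# are an assumed layout, not a real dataset.
import numpy as np
from pathlib import Path
from PIL import Image
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def load_dataset(root: str, size=(64, 64)):
    images, labels = [], []
    for label, folder in enumerate(["good", "defective"]):
        for path in Path(root, folder).glob("*.png"):
            img = Image.open(path).convert("L").resize(size)
            images.append(np.asarray(img, dtype=np.float32).ravel() / 255.0)
            labels.append(label)
    return np.array(images), np.array(labels)

X, y = load_dataset("inspection_samples")  # thousands of labelled image crops
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))

The point is that nobody writes the defect rules: given enough labelled examples, the model infers them – which is precisely what makes the approach dynamic, and precisely why it needs so much representative training data.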

“I look for a unique optical or visual feature, allowing you to make a robust decision for what it is that you’re trying to measure in the first place.”
Ken McClannon

Yet, if machine learning could one day prod vision systems to even greater heights – with the AI-based medical device ecosystem expected to enjoy CAGR of 25.7% through 2027 – McClannon warns that technology isn’t necessarily a panacea. “I look for a unique optical or visual feature, allowing you to make a robust decision for what it is that you’re trying to measure in the first place,” he says, adding that he prefers a “deterministic” approach to QC. Fair enough: with the human cost of faulty equipment potentially fatal, it makes sense that companies might want their QC to deal in absolutes rather than risk the robots going rogue and claiming a fault where none exists. Bharath, for his part, makes a similar point, noting that training machine learning platforms can be cumbersome, requiring manufacturers to supply machines with problematic equipment to ensure they know what they’re looking for.

People power

If insiders are ambivalent about the long-term value of computer vision systems, they’re similarly conscious that humans can never be removed from the equation altogether. To be sure, emphasises Bharath, automation is obviously going to be faster and cheaper than “human observation” in the first instance. But in cases where manufacturers are making small numbers of bespoke devices, potentially tailor-made to individual patients, he suggests it might make more sense to simply check them by hand.

Of course, more mass-produced products can gain much from AI checks, something clear enough if you look at the current buzz of industry activity. Last year, for example, Siemens unveiled SynthAI, a new tool that elegantly dovetails machine vision and AI in manufacturing. Among other things, Siemens engineers are experimenting with so-called ‘synthetic data’ – whereby SynthAI is trained on thousands of randomised, computer-generated images.
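The idea behind synthetic data is straightforward: rather than photographing thousands of real parts, labelled training images are rendered programmatically. The sketch below is a deliberately crude illustration of that principle – it is not SynthAI, and every parameter in it is an assumption chosen for clarity rather than realism.

# Toy illustration of synthetic training data: randomised, computer-generated
# images of a 'part', with defects drawn in by construction so every image is
# automatically labelled. All shapes, sizes and intensities are invented.
import random
from PIL import Image, ImageDraw

def synthetic_part(defective: bool, size=(64, 64)) -> Image.Image:
    img = Image.new("L", size, color=200)      # bright back-lit background
    draw = ImageDraw.Draw(img)
    # A dark rectangular 'part' at a random position
    x, y = random.randint(8, 24), random.randint(8, 24)
    draw.rectangle([x, y, x + 24, y + 24], fill=60)
    if defective:
        # Simulate a hairline scratch across the part
        draw.line([x, y + random.randint(2, 22), x + 24, y + random.randint(2, 22)],
                  fill=230, width=1)
    return img

# Build a labelled synthetic dataset of arbitrary size
samples = [(synthetic_part(flag), int(flag)) for flag in random.choices([True, False], k=1000)]

Production tools of this kind typically render far richer scenes from 3D models, with randomised lighting, textures and camera angles, but the labelled-by-construction principle is the same.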

And if that neatly solves issues around how such systems are trained, Siemens is far from the only company moving in a similar direction. Philips, for instance, has developed a platform which allows users to send new parameters to a machine vision system even as the process itself is happening. In the case of Philips, that makes it easy for manufacturers to ensure droplet dispensers are as accurate as possible – but it’s obvious that the practical applications of such technologies transcend specific production lines.

All the same, McClannon argues that companies can’t simply rush towards a machine vision future without first considering use cases. While he concedes that AI is “creeping in” across the industry – especially now that the technology increasingly comes with “standard suites” of functions accessible to all – he stresses that device manufacturers are still most concerned with finding “the right tool for each application.” To put it differently, while machine learning can doubtless bolster QC in many cases, deterministic vision systems are unlikely to disappear.

Given how important device quality is to a manufacturer’s reputation – and to patient wellbeing – that’s surely just as well.


What is computer vision?

Computer vision is a field of artificial intelligence (AI) that enables computers and systems to derive meaningful information from digital images, videos and other visual inputs — and take actions or make recommendations based on that information. If AI enables computers to think, computer vision enables them to see, observe and understand.

Computer vision works much the same as human vision, except humans have a head start. Human sight has the advantage of lifetimes of context to train how to tell objects apart, how far away they are, whether they are moving and whether there is something wrong in an image.

Computer vision trains machines to perform these functions, but it has to do it in much less time with cameras, data and algorithms rather than retinas, optic nerves and a visual cortex. Because a system trained to inspect products or watch a production asset can analyse thousands of products or processes a minute, noticing imperceptible defects or issues, it can quickly surpass human capabilities.


