Let there be light

8 May 2020



Hyperspectral imaging is an emerging modality for a range of medical applications, such as disease diagnosis and image-guided surgery. The European Photonics Industry Consortium (EPIC) examines how it works and the strategies that have been employed to optimise its use.


Hyperspectral imaging (HSI) is sometimes called ‘imaging spectroscopy’. The name reflects the combination, in one tool, of the advantages of the two techniques that characterise HSI – the spatial resolution of imaging and the chemical specificity of spectroscopy.

The concept of combining these two techniques was first introduced in the late 1970s in the field of remote sensing. In the past few years, the field of hyperspectral imaging applications has expanded hugely and now includes environmental monitoring, smart farming, surveillance, forensic science, the textile industry, waste management, additive manufacturing, the pharmaceutical industry, oil and gas, and painting and art. Many European Commission-funded projects from the Horizon 2020 research and innovation programme, as well as initiatives of the Photonics Public Private Partnership, are exploiting HSI. The IA project 871783 MULTIPLE is one of them, and will deliver breakthrough, cost-effective snapshot HSI and spectrometric solutions suited to real industrial monitoring. The coordinator is AIMEN Technological Center, with partner companies including Senorics, Photonfocus, imec, Tematys, MRA Alava Group and ThingsO2.

For the scope of this article, the focus is on medical applications and on the importance of smart, effective data processing for a correct interpretation of what the hardware produces. From this point of view, HSI also has the advantage of being non-contact, non-invasive and non-ionising. In vivo applications are still challenging because the high dimensionality of the data causes real-time processing issues, but ingenious strategies to overcome them are now being developed.

How HSI works

Independently of the specific technology chosen, the hardware of an HSI system consists of a suitable light source, one or more detectors sensitive in the wavelength range(s) of interest, a system to acquire the images – which can be based on a scanning or a non-scanning method – and a data processing system able to process the images and produce meaningful information for the specific application. The advantage of this imaging technique is that, compared with an RGB camera – which measures in the three spectral bands of red, green and blue – many narrow spectral bands close to each other can be detected, covering an extensive total spectrum ranging from UV to SWIR. In the case of multispectral imaging, the system acquires dozens of spectral bands (typically with an FWHM of 100–200nm) that are not necessarily contiguous. When HSI is used, contiguous spectral bands numbering in the order of 100 to 1,000 are detected. This allows HSI to capture fine features and specific compositions, as in a spectroscopic analysis.

The information that is acquired by the system is a so-called hyperspectral data cube: a three-dimensional (3D) matrix that contains two-dimensional (2D) spatial information (the X and Y dimensions) and a third, spectral dimension (wavelength). This third dimension is what makes the data so large, and hard to handle even for a powerful computer.
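As a concrete illustration, the short Python sketch below (using NumPy, with purely illustrative dimensions) shows how such a cube is typically organised and why its size grows with the number of bands:

```python
import numpy as np

# Hypothetical hyperspectral cube: 512 x 512 pixels, 200 contiguous bands
# (all sizes are illustrative, not taken from any specific instrument).
cube = np.random.rand(512, 512, 200).astype(np.float32)

# The spectrum of one spatial location is a 1D slice along the wavelength axis.
spectrum = cube[100, 250, :]          # shape: (200,)

# A single spectral band is an ordinary 2D grey-scale image.
band_image = cube[:, :, 42]           # shape: (512, 512)

# The memory footprint grows linearly with the number of bands,
# which is why the third dimension makes the data hard to handle.
print(f"Cube size: {cube.nbytes / 1e6:.0f} MB")
```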

For medical applications, such as distinguishing cancerous cells from healthy ones, the intrinsic characteristics of the two kinds of tissue are different and can also evolve during the progression of a disease in a certain portion of the organ volume compared with adjacent ones. This implies that absorption, reflection and scattering characteristics also change across tissues, and an imaging technique can be used as a discriminator based on these detected optical differences. For diseases affecting organs such as the brain, where it is crucial to distinguish the exact shape and borders of a tumour in order to remove the malignant cells and spare as much healthy tissue as possible, this technology could help during surgery in real time.

Data processing in HSI

In order to handle this amount of data and speed up its processing so that the technology can be used in real time, different kinds of machine-learning – and in particular deep-learning (DL) – strategies have been tested and can be applied, depending on the aim of the imaging session. Beforehand, a pre-processing treatment of the raw hyperspectral information needs to be carried out, to avoid possible artefacts being wrongly interpreted as real features.

The procedure is divided into three steps: image calibration, noise reduction and data normalisation. The calibration takes place using white and dark reference images and is performed pixel-wise; this helps to suppress the influence of dark current. To overcome the risk of artefacts in the data due to imperfections of the sensors, the associated noise is removed. This can be done via suitable algorithms, and in the case of limited performance at the border of the spectral region of sensitivity, the corresponding bands can be discarded from the evaluation.
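A minimal Python sketch of the pixel-wise calibration step, assuming white and dark reference cubes acquired with the same set-up (the function and array names are illustrative, not those of any particular system):

```python
import numpy as np

def calibrate(raw, white_ref, dark_ref, eps=1e-6):
    """Convert a raw hyperspectral cube to relative reflectance, pixel-wise.

    raw, white_ref, dark_ref: arrays of shape (rows, cols, bands).
    The dark reference captures the dark-current offset of the sensor;
    the white reference captures the illumination and sensor response.
    """
    reflectance = (raw - dark_ref) / (white_ref - dark_ref + eps)
    return np.clip(reflectance, 0.0, 1.0)

# Noisy bands at the edge of the sensor's sensitivity range can simply be
# discarded from the evaluation, e.g.:
# calibrated = calibrate(raw, white, dark)[:, :, 5:-5]
```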

To cover a spectrum from UV to SWIR, it is necessary to use at least two detectors with a sufficient sensitivity overlap, so that the full spectral range is safely covered. Different illumination on different zones of the area of interest can bias the classification of the images; this is particularly relevant in the case of scanner-based detection, with moving parts in the set-up, and of a non-uniform surface to image. To avoid this, it is possible to normalise the brightness analytically. This specific risk of biasing the results of an HSI-based analysis has to be thoroughly addressed in the immediate future, in order for HSI to become officially eligible as a tool in medical applications (for example, in FDA-approved, fully medical-graded systems).
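One simple analytical way to normalise brightness is to scale each pixel’s spectrum by its own norm, so that classification relies on spectral shape rather than absolute intensity; the sketch below shows this common choice, which is an assumption for illustration rather than the method used by any specific system:

```python
import numpy as np

def normalise_brightness(cube, eps=1e-8):
    """Divide each pixel's spectrum by its L2 norm.

    This removes much of the pixel-to-pixel illumination variation caused,
    for example, by a non-uniform surface or a moving scanner, while
    preserving the relative shape of each spectrum.
    """
    norms = np.linalg.norm(cube, axis=-1, keepdims=True)
    return cube / (norms + eps)
```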

At this point of the data manipulation, it is necessary to reduce the dimensionality of the hyperspectral images. This is done by projecting the hyperspectral cube onto a space with only a few dimensions. The real skill here is choosing the right trade-off: a method that reduces the high-dimensional data to a representation reasonably smaller than the original, but still capable of describing all the features accurately enough. Many algorithms exist for this purpose, and they can be linear or non-linear. One of the best-known linear methods is principal component analysis (PCA). It is founded on the principle of global linearity and is less suitable for preserving local features, such as continuity or manifold structures, which are fundamental for some applications. Nevertheless, the existing non-linear techniques are typically based on the local application of the same linear method – for example, non-linear structures are decomposed into linear subspaces, where linear rules of dimensional reduction are applied locally and independently on the different subspaces.
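As an illustration of the linear approach, the sketch below uses scikit-learn’s PCA to project each pixel’s spectrum onto a handful of principal components; the number of components retained is an illustrative choice, not a recommendation:

```python
import numpy as np
from sklearn.decomposition import PCA

def reduce_dimensionality(cube, n_components=10):
    """Project a (rows, cols, bands) cube onto its first principal components."""
    rows, cols, bands = cube.shape
    pixels = cube.reshape(-1, bands)            # one spectrum per row

    pca = PCA(n_components=n_components)
    reduced = pca.fit_transform(pixels)         # (rows*cols, n_components)

    # Quick check of how much spectral variance the retained components explain.
    print(f"Explained variance: {pca.explained_variance_ratio_.sum():.3f}")
    return reduced.reshape(rows, cols, n_components)
```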

Up to this point, no machine-learning technique by definition has been applied, and the choice of the most effective method remains open, based on the application and the kind of data available. DL is based on artificial neural networks: systems that ‘learn’ to perform tasks by taking into account examples fed from the outside, similarly to how animal brains work. One such algorithm belongs to the family of autoencoders, which are DL-based but still use unsupervised training aimed at decreasing the dimensionality. Unsupervised learning looks for undetected patterns in a data set without pre-existing labels from humans, who are partially involved in semi-supervised learning and fully involved in supervised learning.
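A minimal sketch of such a spectral autoencoder in PyTorch, with purely illustrative layer sizes: the encoder compresses each spectrum to a low-dimensional code, the decoder reconstructs it, and training minimises the reconstruction error without any labels.

```python
import torch
import torch.nn as nn

class SpectralAutoencoder(nn.Module):
    """Unsupervised dimensionality reduction for single-pixel spectra."""

    def __init__(self, n_bands=200, code_size=10):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_bands, 64), nn.ReLU(),
            nn.Linear(64, code_size),
        )
        self.decoder = nn.Sequential(
            nn.Linear(code_size, 64), nn.ReLU(),
            nn.Linear(64, n_bands),
        )

    def forward(self, x):
        code = self.encoder(x)
        return self.decoder(code), code

# Training sketch, where spectra is a tensor of shape (n_pixels, n_bands):
# model = SpectralAutoencoder()
# reconstruction, code = model(spectra)
# loss = nn.MSELoss()(reconstruction, spectra)   # reconstruction error only
```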

DL-based strategies

One very common approach in computer vision for HSI analysis that makes use of DL is the convolutional neural network (CNN), in its different configurations (spectral, spatial, spectral-spatial). CNNs do not necessarily need a PCA treatment beforehand, but it is possible to refine the input data set with it. They are particularly suitable for including additional meaningful restrictions in the learning process, and they can also work with limited training data sets.
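One possible spectral-spatial configuration is sketched below in PyTorch, with illustrative layer sizes: a 3D convolution slides over both the spatial neighbourhood of a pixel and its spectral axis, followed by a small classifier (here assumed to separate healthy from cancerous tissue).

```python
import torch
import torch.nn as nn

class SpectralSpatialCNN(nn.Module):
    """Classify small hyperspectral patches (e.g. 5x5 pixels x n_bands)."""

    def __init__(self, n_bands=30, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            # Input shape: (batch, 1, n_bands, 5, 5)
            nn.Conv3d(1, 8, kernel_size=(7, 3, 3), padding=(3, 1, 1)),
            nn.ReLU(),
            nn.Conv3d(8, 16, kernel_size=(7, 3, 3), padding=(3, 1, 1)),
            nn.ReLU(),
        )
        self.classifier = nn.Linear(16 * n_bands * 5 * 5, n_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

# patch = torch.randn(32, 1, 30, 5, 5)   # a batch of 32 patches
# logits = SpectralSpatialCNN()(patch)   # shape: (32, 2)
```

Here n_bands could be the raw band count or, if PCA is applied first, the number of retained components.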

All DL-based strategies have the fundamental need of a sufficiently broad source of data sets in order not to incur so-called overfitting. When, for example, only limited training images are available due to the nature of the clinical setting, the resulting statistical model can end up describing random error or noise instead of the correct features. To overcome this issue, some techniques have been developed, including the use of RGB images to rapidly obtain an HSI data set via DL. Nevertheless, this so-called spectral upsampling usually gives good results only in the visible spectral range and not in the NIR-SWIR. There are other tricks to generate synthetic images; one is called ‘jittering’, which augments the training database by applying a sequence of random transformations to the initially available images. These are then used only for training and not for testing.
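A minimal sketch of the kind of jittering described above, applying a sequence of random geometric and intensity transformations (the specific transformations and their ranges are assumptions made for illustration):

```python
import numpy as np

def jitter(cube, rng=np.random.default_rng()):
    """Return a randomly transformed copy of a (rows, cols, bands) cube.

    Assumes square spatial dimensions so that rotations preserve the shape.
    """
    out = cube.copy()
    if rng.random() < 0.5:                      # random horizontal flip
        out = out[:, ::-1, :]
    if rng.random() < 0.5:                      # random vertical flip
        out = out[::-1, :, :]
    out = np.rot90(out, k=rng.integers(4), axes=(0, 1))   # random 90° rotation
    out *= rng.uniform(0.9, 1.1)                # small random brightness change
    out += rng.normal(0.0, 0.01, out.shape)     # small additive spectral noise
    return out

# The augmented cubes are used only for training, never for testing:
# train_set = [jitter(c) for c in original_cubes for _ in range(10)]
```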

Depending on the application, the results of such manipulation of the hyperspectral cubes lead to the processing output. In the case of tissue imaging for real-time monitoring, the operator should be able to distinguish cancerous cells from healthy ones by following a clear colour code and a reasonably fast image update. At this point, the algorithms employed in the process need to be evaluated, and three parameters are commonly used for this: accuracy, sensitivity and specificity. Their definitions are rigorous and based on the analysis of the true negative, true positive, false positive and false negative cases, confirmed by other standard techniques – such as biopsy or MRI – for medical applications.
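For reference, the three metrics are derived from the counts of true/false positives and negatives against the ground truth (for example, biopsy results); a short sketch:

```python
def evaluate(tp, tn, fp, fn):
    """Standard metrics from true/false positive/negative counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)   # true-positive rate: diseased correctly found
    specificity = tn / (tn + fp)   # true-negative rate: healthy correctly found
    return accuracy, sensitivity, specificity

# Example with made-up counts:
# evaluate(tp=80, tn=90, fp=10, fn=20)  ->  (0.85, 0.80, 0.90)
```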

Today, with the application of different DL-architecture-based strategies, it is possible to process HSI in a few seconds, which is a very promising feature for the upcoming standardisation of real-time implementations of HSI.


