Hyperspectral imaging


Hyperspectral imaging collects and processes information from across the electromagnetic spectrum. The goal of hyperspectral imaging is to obtain the spectrum for each pixel in the image of a scene, with the purpose of finding objects, identifying materials, or detecting processes. There are three general types of spectral imagers: push broom scanners and the related whisk broom scanners, which read images over time; band sequential scanners, which acquire images of an area at different wavelengths; and snapshot hyperspectral imagers, which use a staring array to generate an image in an instant.
Whereas the human eye sees the color of visible light in mostly three bands (red, green, and blue), spectral imaging divides the spectrum into many more bands. This technique of dividing images into bands can be extended beyond the visible. In hyperspectral imaging, the recorded spectra have fine wavelength resolution and cover a wide range of wavelengths. Hyperspectral imaging measures continuous spectral bands, as opposed to multiband imaging, which measures spaced spectral bands.
Engineers build hyperspectral sensors and processing systems for applications in astronomy, agriculture, molecular biology, biomedical imaging, geosciences, physics, and surveillance. Hyperspectral sensors look at objects using a vast portion of the electromagnetic spectrum. Certain objects leave unique "fingerprints" in the electromagnetic spectrum. Known as spectral signatures, these "fingerprints" enable identification of the materials that make up a scanned object. For example, a spectral signature for oil helps geologists find new oil fields.

Sensors

Figuratively speaking, hyperspectral sensors collect information as a set of "images." Each image represents a narrow wavelength range of the electromagnetic spectrum, also known as a spectral band. These "images" are combined to form a three-dimensional hyperspectral data cube for processing and analysis, where x and y represent two spatial dimensions of the scene, and λ represents the spectral dimension.
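The structure of the hyperspectral data cube can be illustrated with a short sketch. The array below is a minimal, hypothetical example (the sizes are arbitrary; 224 bands is used only as an illustrative count): indexing along the two spatial axes yields a full spectrum for one pixel, and indexing along λ yields one monochromatic "image."

```python
import numpy as np

# Hypothetical hyperspectral data cube: 100 x 100 spatial pixels,
# 224 spectral bands. Axis order (y, x, wavelength) is a convention
# chosen here for illustration, not a fixed standard.
rows, cols, bands = 100, 100, 224
cube = np.zeros((rows, cols, bands))

# Each spatial pixel holds a complete spectrum:
spectrum = cube[42, 17, :]    # spectrum of the pixel at y=42, x=17
# Each spectral band holds a complete monochromatic image:
band_image = cube[:, :, 100]  # "image" for spectral band index 100
```

Slicing the same array along different axes is exactly the duality described above: a stack of narrow-band images and a grid of per-pixel spectra are two views of one cube.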
Technically speaking, there are four ways for sensors to sample the hyperspectral cube: spatial scanning, spectral scanning, snapshot imaging, and spatio-spectral scanning.
Hyperspectral cubes are generated from airborne sensors like NASA's Airborne Visible/Infrared Imaging Spectrometer, or from satellites like NASA's EO-1 with its hyperspectral instrument Hyperion. However, for many development and validation studies, handheld sensors are used.
The precision of these sensors is typically measured in spectral resolution, which is the width of each band of the spectrum that is captured. If the scanner detects a large number of fairly narrow frequency bands, it is possible to identify objects even if they are only captured in a handful of pixels. However, spatial resolution is a factor in addition to spectral resolution. If the pixels are too large, then multiple objects are captured in the same pixel and become difficult to identify. If the pixels are too small, then the intensity captured by each sensor cell is low, and the decreased signal-to-noise ratio reduces the reliability of measured features.
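The trade-off between spatial resolution and signal-to-noise ratio can be demonstrated numerically. The sketch below, under the simplifying assumption of uncorrelated additive noise, bins 2×2 pixel blocks: averaging four samples roughly halves the noise standard deviation while halving the spatial resolution.

```python
import numpy as np

# Simplified model: a flat scene of intensity 10 plus uncorrelated
# Gaussian noise (sigma = 1). Real detector noise is more complex;
# this only illustrates the averaging effect.
rng = np.random.default_rng(0)
noisy = 10.0 + rng.normal(0.0, 1.0, size=(64, 64))

# 2x2 binning: average each non-overlapping 2x2 block of pixels.
binned = noisy.reshape(32, 2, 32, 2).mean(axis=(1, 3))

noise_before = (noisy - 10.0).std()
noise_after = (binned - 10.0).std()  # roughly half of noise_before
```

Larger effective pixels therefore improve the reliability of measured features at the cost of mixing more objects into each pixel, which is the tension described above.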
The acquisition and processing of hyperspectral images is also referred to as imaging spectroscopy or, with reference to the hyperspectral cube, as 3D spectroscopy.

Scanning techniques

There are four basic techniques for acquiring the three-dimensional dataset of a hyperspectral cube. The choice of technique depends on the specific application, seeing that each technique has context-dependent advantages and disadvantages.

Spatial scanning

In spatial scanning, each two-dimensional sensor output represents a full slit spectrum. Hyperspectral imaging devices for spatial scanning obtain slit spectra by projecting a strip of the scene onto a slit and dispersing the slit image with a prism or a grating. These systems have the drawbacks of analyzing the image line by line and of having some mechanical parts integrated into the optical train. With these line-scan cameras, the spatial dimension is collected through platform movement or scanning. This requires stabilized mounts or accurate pointing information to 'reconstruct' the image. Nonetheless, line-scan systems are particularly common in remote sensing, where it is sensible to use mobile platforms. Line-scan systems are also used to scan materials moving by on a conveyor belt. A special case of line scanning is point scanning, where a point-like aperture is used instead of a slit, and the sensor is essentially one-dimensional instead of 2D.
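Line-scan acquisition can be sketched as stacking one slit spectrum per platform step. The function and dimensions below are illustrative stand-ins, not a real instrument API; a real system would read detector frames from hardware and use pointing data to register them.

```python
import numpy as np

# Hypothetical line-scan (push broom) geometry: each readout is one
# slit spectrum of shape (cross_track, bands); platform motion
# supplies the along-track spatial dimension.
cross_track, bands, along_track = 320, 128, 50

def read_slit_spectrum(step):
    # Stand-in for one detector frame at the given along-track step.
    return np.full((cross_track, bands), float(step))

lines = [read_slit_spectrum(step) for step in range(along_track)]
cube = np.stack(lines, axis=0)  # (along_track, cross_track, bands)
```

The cube only has a meaningful spatial interpretation if the platform motion is known, which is why stabilized mounts or accurate pointing information are required.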

Spectral scanning

In spectral scanning, each 2D sensor output represents a monochromatic spatial (x, y) map of the scene. HSI devices for spectral scanning are typically based on optical band-pass filters. The scene is spectrally scanned by exchanging one filter after another while the platform remains stationary. In such "staring", wavelength-scanning systems, spectral smearing can occur if there is movement within the scene, invalidating spectral correlation/detection. Nonetheless, there is the advantage of being able to pick and choose spectral bands, and of having a direct representation of the two spatial dimensions of the scene. If the imaging system is used on a moving platform, such as an airplane, images acquired at different wavelengths correspond to different areas of the scene. The spatial features on each of the images may be used to realign the pixels.
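The staring, filter-exchange scheme can be sketched as one exposure per band-pass filter, stacked along the spectral axis. The filter wavelengths and capture function below are illustrative assumptions, not a real camera interface.

```python
import numpy as np

# Hypothetical staring system: one full-frame exposure per band-pass
# filter, stationary platform. Filter center wavelengths are made up.
height, width = 240, 320
filters_nm = [450, 550, 650, 750, 850]

def capture_frame(wavelength_nm):
    # Stand-in for one filtered exposure of the stationary scene.
    return np.full((height, width), float(wavelength_nm))

cube = np.stack([capture_frame(w) for w in filters_nm], axis=-1)
# cube shape: (y, x, band). Any scene motion between the sequential
# exposures would smear spectra across frames.
```

Because each band is a complete image of the scene, the two spatial dimensions are represented directly, which is the advantage noted above.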

Non-scanning

In non-scanning, a single 2D sensor output contains all spatial and spectral data. HSI devices for non-scanning yield the full datacube at once, without any scanning. Figuratively speaking, a single snapshot represents a perspective projection of the datacube, from which its three-dimensional structure can be reconstructed. The most prominent benefits of these snapshot hyperspectral imaging systems are the snapshot advantage and the shorter acquisition time. A number of systems have been designed, including computed tomographic imaging spectrometry, fiber-reformatting imaging spectrometry, integral field spectroscopy with lenslet arrays, multi-aperture integral field spectrometry, integral field spectroscopy with image slicing mirrors, image-replicating imaging spectrometry, filter stack spectral decomposition, coded aperture snapshot spectral imaging, image mapping spectrometry, and multispectral Sagnac interferometry. However, computational effort and manufacturing costs are high. In an effort to reduce the computational demands and potentially the high cost of non-scanning hyperspectral instrumentation, prototype devices based on multivariate optical computing have been demonstrated. These devices have been based on the multivariate optical element spectral calculation engine or the spatial light modulator spectral calculation engine. In these platforms, chemical information is calculated in the optical domain prior to imaging, so that the chemical image relies on conventional camera systems with no further computing. A disadvantage of these systems is that no spectral information is ever acquired, only the chemical information, so post-processing or reanalysis of the spectra is not possible.

Spatiospectral scanning

In spatiospectral scanning, each 2D sensor output represents a wavelength-coded spatial (x, y) map of the scene. A prototype for this technique, introduced in 2014, consists of a camera at some non-zero distance behind a basic slit spectroscope. Advanced spatiospectral scanning systems can be obtained by placing a dispersive element before a spatial scanning system. Scanning can be achieved by moving the whole system relative to the scene, by moving the camera alone, or by moving the slit alone. Spatiospectral scanning unites some advantages of spatial and spectral scanning, thereby alleviating some of their disadvantages.

Distinguishing hyperspectral from multispectral imaging

Hyperspectral imaging is part of a class of techniques commonly referred to as spectral imaging or spectral analysis. The term "hyperspectral imaging" derives from the development of NASA's Airborne Imaging Spectrometer and AVIRIS in the mid-1980s. Although NASA prefers the earlier term "imaging spectroscopy" over "hyperspectral imaging," use of the latter term has become more prevalent in scientific and non-scientific language. In a peer-reviewed letter, experts recommend using the terms "imaging spectroscopy" or "spectral imaging" and avoiding exaggerated prefixes such as "hyper-," "super-," and "ultra-," to prevent misnomers in discussion.
Hyperspectral imaging is related to multispectral imaging. The distinction between hyper- and multi-band is sometimes based incorrectly on an arbitrary "number of bands" or on the type of measurement. Hyperspectral imaging uses continuous and contiguous ranges of wavelengths whilst multiband imaging uses a subset of targeted wavelengths at chosen locations.
Multiband imaging deals with several images at discrete and somewhat narrow bands. Being "discrete and somewhat narrow" is what distinguishes multispectral imaging in the visible wavelength from color photography. A multispectral sensor may have many bands covering the spectrum from the visible to the longwave infrared. Multispectral images do not produce the "spectrum" of an object. Landsat is a prominent practical example of multispectral imaging.
Hyperspectral imaging deals with narrow spectral bands over a continuous spectral range, producing the spectra of all pixels in the scene. A sensor with only 20 bands can also be hyperspectral when it covers the range from 500 to 700 nm with 20 bands each 10 nm wide, while a sensor with 20 discrete bands covering the visible, near, short wave, medium wave, and long wave infrared would be considered multispectral.
In addition, some formal standards define HSI using a minimum number of spectral channels. For example, the IEEE 4001 standard for hyperspectral data characterization defines hyperspectral imagery as containing 30 or more spectral bands, primarily for data interoperability, system classification, and metadata standardization.
Ultraspectral could be reserved for interferometer type imaging sensors with a very fine spectral resolution. These sensors often have a low spatial resolution of several pixels only, a restriction imposed by the high data rate.