
Hyperspectral imaging measures the spatial and spectral characteristics of an object by imaging it at many different wavelengths. The wavelength range extends beyond the visible spectrum, from the ultraviolet (UV) to the long-wave infrared (LWIR). The most commonly used bands are in the visible, near-infrared, and mid-infrared ranges. A hyperspectral imaging sensor acquires several images at narrow, contiguous wavelengths within a specified spectral range. Each of these narrowband images captures subtle, detailed information that a broadband image cannot.

Hyperspectral image processing involves representing, analyzing, and interpreting information contained in the hyperspectral images.

The values measured by a hyperspectral imaging sensor are stored in a binary data file by using the band sequential (BSQ), band-interleaved-by-pixel (BIP), or band-interleaved-by-line (BIL) encoding format. The data file is associated with a header file that contains ancillary information (metadata), such as sensor parameters, acquisition settings, spatial dimensions, spectral wavelengths, and the encoding format, required for proper interpretation of the values in the data file.

For hyperspectral image processing, the values read from the data file are arranged into a three-dimensional (3-D) array of the form *M*-by-*N*-by-*C*, where *M* and *N* are the spatial dimensions of the acquired data, and *C* is the spectral dimension, specifying the number of spectral wavelengths used during acquisition. Thus, you can consider the 3-D array as a set of two-dimensional (2-D) monochromatic images captured at varying wavelengths. This set is known as the *hyperspectral data cube*, or *data cube*.
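As a toolbox-independent illustration of the three encoding formats, the following NumPy sketch serializes the same *M*-by-*N*-by-*C* cube under each interleave scheme and reads it back (the array sizes are illustrative):

```python
import numpy as np

# Synthetic data cube: M x N spatial pixels, C spectral bands.
M, N, C = 2, 3, 4
cube = np.arange(M * N * C).reshape(M, N, C)

# BSQ stores one complete band image after another: (C, M, N) on disk.
bsq = cube.transpose(2, 0, 1).ravel()
# BIP stores all band values of each pixel together: (M, N, C) on disk.
bip = cube.ravel()
# BIL stores, for each image row, one line per band: (M, C, N) on disk.
bil = cube.transpose(0, 2, 1).ravel()

# Reading back: invert each layout into the M x N x C data cube.
cube_from_bsq = bsq.reshape(C, M, N).transpose(1, 2, 0)
cube_from_bip = bip.reshape(M, N, C)
cube_from_bil = bil.reshape(M, C, N).transpose(0, 2, 1)
```

All three round-trips recover the identical cube; only the on-disk ordering differs, which is why the header file must record the encoding format.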

The `hypercube` function constructs the data cube by reading the data file and the metadata in the associated header file. The `hypercube` function creates a `hypercube` object and stores the data cube, spectral wavelengths, and metadata in its properties. You can use the `hypercube` object as input to all other functions in the Image Processing Toolbox™ Hyperspectral Imaging Library.

**Color Representation of Data Cube**

To visualize and understand the object being imaged, it is useful to represent the data cube as a 2-D image by using color schemes. The color representation of the data cube enables you to visually inspect the data and supports decision making. You can use the `colorize` function to compute the Red-Green-Blue (RGB), false-color, and color-infrared (CIR) representations of the data cube.

The RGB color scheme uses the red, green, and blue spectral band responses to generate the 2-D image of the hyperspectral data cube. The RGB color scheme gives the image a natural appearance, but results in a significant loss of subtle information.

The false-color scheme uses a combination of any number of bands other than the visible red, green, and blue spectral bands. Use false-color representation to visualize the spectral responses of bands outside the visible spectrum. The false-color scheme efficiently captures distinct information across all spectral bands of hyperspectral data.

The CIR color scheme uses spectral bands in the near-infrared (NIR) range. The CIR representation of a hyperspectral data cube is particularly useful for displaying and analyzing the vegetation areas of the data cube.
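Independent of the `colorize` function, the idea behind all three color schemes is the same: pick three bands, stack them as color channels, and contrast-stretch each channel. The following NumPy sketch uses synthetic data and illustrative band indices, not fixed wavelength assignments:

```python
import numpy as np

# Toy data cube with 10 bands; assume (as an illustration) that bands
# 2, 5, and 8 lie near the blue, green, and red wavelengths and band 9
# lies in the near-infrared.
rng = np.random.default_rng(0)
cube = rng.random((4, 4, 10))

def composite(cube, band_idx):
    """Stack three bands and contrast-stretch each channel to [0, 1]."""
    img = cube[:, :, band_idx].astype(float)
    lo = img.min(axis=(0, 1), keepdims=True)
    hi = img.max(axis=(0, 1), keepdims=True)
    return (img - lo) / (hi - lo)

rgb = composite(cube, [8, 5, 2])   # red, green, blue bands -> natural color
cir = composite(cube, [9, 8, 5])   # NIR, red, green bands -> color-infrared
```

A false-color composite is the same operation with any other choice of three bands, including bands outside the visible spectrum.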

Hyperspectral imaging sensors typically have high spectral resolution and low spatial resolution. The spatial and spectral characteristics of the acquired hyperspectral data are characterized by its pixels. Each pixel is a vector of values that specify the intensities at a location (*x*,*y*) in *z* different bands. This vector is known as the *pixel spectrum*, and it defines the spectral signature of the pixel located at (*x*,*y*). Pixel spectra are important features in hyperspectral data analysis, but they get distorted by factors such as sensor noise, atmospheric effects, and low resolution.

You can use the `denoiseNGMeet` function to remove noise from hyperspectral data by using the non-local meets global approach.

To enhance the spatial resolution of hyperspectral data, you can use image fusion methods. The fusion approach combines information from the low-resolution hyperspectral data with a high-resolution multispectral image or panchromatic image of the same scene. This approach is also known as *sharpening* or *pansharpening* in hyperspectral image analysis. Pansharpening specifically refers to fusion between hyperspectral and panchromatic data. You can use the `sharpencnmf` function to sharpen hyperspectral data by using the coupled non-negative matrix factorization (CNMF) method.
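The CNMF algorithm itself is involved, but the basic idea of fusion can be sketched with a much simpler ratio-based (Brovey-style) scheme: upsample the hyperspectral cube to the panchromatic resolution and rescale each band by the ratio of the panchromatic intensity to the cube's mean intensity. This is a minimal sketch on synthetic data, not the CNMF method the toolbox uses:

```python
import numpy as np

# Low-resolution hyperspectral cube and a 2x-resolution panchromatic
# image of the same (synthetic) scene; offsets keep intensities nonzero.
rng = np.random.default_rng(1)
hs = rng.random((4, 4, 6)) + 0.1
pan = rng.random((8, 8)) + 0.1
scale = 2

# Upsample the cube by pixel replication (nearest-neighbor).
hs_up = np.kron(hs, np.ones((scale, scale, 1)))

# Brovey-style ratio sharpening: modulate every band by the ratio of the
# panchromatic intensity to the mean intensity of the upsampled cube.
intensity = hs_up.mean(axis=2)
sharpened = hs_up * (pan / intensity)[:, :, None]
```

This preserves the relative band shapes of each pixel spectrum while injecting the high-resolution spatial detail of the panchromatic image.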

To compensate for atmospheric effects, you must first calibrate the pixel values, which are digital numbers (DNs). You must preprocess the data by calibrating the DNs using radiometric and atmospheric correction methods. This process improves interpretation of the pixel spectra and provides better results when you analyze multiple data sets, as in a classification problem. For information about radiometric calibration and atmospheric correction methods, see Hyperspectral Data Correction.

Another preprocessing step that is important in all hyperspectral imaging applications is *dimensionality reduction*. The large number of bands in hyperspectral data increases the computational complexity of processing the data cube. The contiguous nature of the band images results in redundant information across bands: neighboring bands in a hyperspectral image have high correlation, which results in spectral redundancy. You can remove the redundant bands by decorrelating the band images. Popular approaches for reducing the spectral dimensionality of a data cube include band selection and orthogonal transforms.

The *band selection* approach uses orthogonal space projections to find the spectrally distinct and most informative bands in the data cube. Use the `selectBands` and `removeBands` functions for finding the most informative bands and for removing one or more bands, respectively.

*Orthogonal transforms*, such as principal component analysis (PCA) and maximum noise fraction (MNF), decorrelate the band information and find the principal component bands.

PCA transforms the data to a lower-dimensional space and finds principal component vectors with their directions along the maximum variances of the input bands. The principal components are in descending order of the amount of total variance explained.

MNF computes the principal components that maximize the signal-to-noise ratio, rather than the variance. The MNF transform is particularly efficient at deriving principal components from noisy band images. The principal component bands are spectrally distinct bands with low interband correlation.

The `hyperpca` and `hypermnf` functions reduce the spectral dimensionality of the data cube by using the PCA and MNF transforms, respectively. You can use the pixel spectra derived from the reduced data cube for hyperspectral data analysis.
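Independent of `hyperpca`, the PCA reduction step can be sketched in NumPy by diagonalizing the band covariance and keeping the top components (the cube dimensions and component count here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
M, N, C, k = 5, 5, 12, 3
cube = rng.random((M, N, C))

# Flatten to (pixels x bands), center the bands, and project onto the
# top-k eigenvectors of the band covariance matrix.
X = cube.reshape(-1, C)
Xc = X - X.mean(axis=0)
cov = Xc.T @ Xc / (Xc.shape[0] - 1)
eigval, eigvec = np.linalg.eigh(cov)   # eigenvalues in ascending order
pcs = eigvec[:, ::-1][:, :k]           # top-k principal directions
reduced = (Xc @ pcs).reshape(M, N, k)  # reduced data cube, M x N x k
```

The reduced cube keeps the directions of maximum variance, so most of the spectral information survives in far fewer bands.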

In a hyperspectral image, the intensity values recorded at each pixel specify the
spectral characteristics of the region that the pixel belongs to. The region can be
a homogeneous surface or heterogeneous surface. The pixels that belong to a
homogeneous surface are known as *pure pixels*. These pure
pixels constitute the *endmembers* of the hyperspectral
data.

Heterogeneous surfaces are a combination of two or more distinct homogeneous
surfaces. The pixels belonging to heterogeneous surfaces are known as
*mixed pixels*. The spectral signature of a mixed pixel is
a combination of two or more endmember signatures. This spatial heterogeneity is
mainly due to the low spatial resolution of the hyperspectral sensor.

*Spectral unmixing* is the process of
decomposing the spectral signatures of mixed pixels into their constituent
endmembers. The spectral unmixing process involves two steps:

*Endmember extraction* — The spectra of the endmembers are prominent features in the hyperspectral data and can be used for efficient spectral unmixing, segmentation, and classification of hyperspectral images. Convex-geometry-based approaches, such as pixel purity index (PPI), fast iterative pixel purity index (FIPPI), and N-finder (N-FINDR), are efficient approaches for endmember extraction.

Use the `ppi` function to estimate the endmembers by using the PPI approach. The PPI approach projects the pixel spectra to an orthogonal space and identifies the extrema pixels in the projected space as endmembers. This is a non-iterative approach, and the results depend on the random unit vectors generated for orthogonal projection. To improve results, you must increase the number of random unit vectors for projection, which can be computationally expensive.

Use the `fippi` function to estimate the endmembers by using the FIPPI approach. The FIPPI approach is an iterative approach that uses an automatic target generation process to estimate the initial set of unit vectors for orthogonal projection. The algorithm converges faster than the PPI approach and identifies endmembers that are distinct from one another.

Use the `nfindr` function to estimate the endmembers by using the N-FINDR method. N-FINDR is an iterative approach that constructs a simplex from the pixel spectra. The approach assumes that the volume of a simplex formed by the endmembers is larger than the volume defined by any other combination of pixels. The set of pixel signatures for which the simplex volume is largest are the endmembers.

*Abundance map estimation* — Given the endmember signatures, it is useful to estimate the fractional amount of each endmember present in each pixel. You can generate abundance maps for each endmember, which represent the distribution of the endmember spectra in the image. You can label a pixel as belonging to an endmember by comparing all of the abundance map values obtained for that pixel.

Use the `estimateAbundanceLS` function to estimate the abundance maps for each endmember spectrum.
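The two unmixing steps can be sketched together in NumPy: a PPI-style endmember search over random projections, followed by unconstrained least-squares abundance estimation. This is a simplified, toolbox-independent sketch on synthetic data, not the `ppi` or `estimateAbundanceLS` implementation:

```python
import numpy as np

rng = np.random.default_rng(3)
C, K = 8, 3                              # bands, endmembers
E = rng.random((C, K))                   # true endmember signatures

# Build 200 mixed pixels as convex combinations of the endmembers, then
# append the K pure pixels themselves (indices 200..202).
w = rng.dirichlet(np.ones(K), size=200)
pixels = np.vstack([w @ E.T, E.T])

# Step 1, PPI-style extraction: project all spectra onto random unit
# vectors ("skewers") and count how often each pixel is an extreme point.
skewers = rng.normal(size=(C, 2000))
skewers /= np.linalg.norm(skewers, axis=0)
proj = pixels @ skewers
counts = np.zeros(len(pixels), dtype=int)
np.add.at(counts, proj.argmax(axis=0), 1)
np.add.at(counts, proj.argmin(axis=0), 1)
endmembers = pixels[np.argsort(counts)[-K:]]   # highest purity counts

# Step 2, least-squares abundance estimation for one mixed pixel.
pixel = pixels[0]
ab, *_ = np.linalg.lstsq(endmembers.T, pixel, rcond=None)
```

Because linear projections attain their extremes at the vertices of the convex hull, the pure pixels accumulate the purity counts, and the recovered abundances of a noiseless mixed pixel sum to one.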

Interpret the pixel spectra by performing *spectral matching*. Spectral matching identifies the class of an endmember material by comparing its spectrum with one or more reference spectra. The reference data consists of pure spectral signatures of materials, which are available as spectral libraries.

Use the `readEcostressSig` function to read the reference spectra files from the ECOSTRESS spectral library. Then, you can compute the similarity between the ECOSTRESS library spectra and an endmember spectrum by using the `spectralMatch` function.

The geometrical characteristics and the probability distribution values of the pixel spectra are important features for spectral matching. You can improve matching efficiency by combining both geometrical and probabilistic characteristics. Such combined measures have higher discrimination capability than the individual approaches and are more suitable for discriminating spectrally similar (intra-species) targets. This table lists the functions available for computing the spectral matching score.

| Method | Description |
| --- | --- |
| `sam` | Spectral angle mapper (SAM) matches two spectra based on their geometrical characteristics. The SAM measure computes the angle between two spectral signatures. A smaller angle represents a better match between two spectra. This measure is insensitive to illumination changes. |
| `sid` | Spectral information divergence (SID) matches two spectra based on their probability distributions. This method is efficient in identifying mixed pixel spectra. A low SID value implies higher similarity between two spectra. |
| `sidsam` | Combination of SID and SAM. The SID-SAM approach has better discrimination capability than SID or SAM individually. A minimum score implies higher similarity between two spectra. |
| `jmsam` | Combination of the Jeffries–Matusita (JM) distance and SAM. Low distance values imply higher similarity between two spectra. This method is particularly efficient in discriminating spectrally close targets. |
| `ns3` | Normalized spectral similarity score (NS3), which combines the Euclidean distance and SAM. Low distance values imply higher similarity between two spectra. This method has high discrimination capability but requires extensive reference data for high accuracy. |
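As an illustration of the geometric measure, here is a minimal, toolbox-independent SAM computation. Because SAM depends only on the angle between spectra, a uniformly scaled copy of a spectrum (as produced by an illumination change) matches it almost exactly:

```python
import numpy as np

def sam(s1, s2):
    """Spectral angle (radians) between two spectra; 0 = identical shape."""
    cos = np.dot(s1, s2) / (np.linalg.norm(s1) * np.linalg.norm(s2))
    return np.arccos(np.clip(cos, -1.0, 1.0))

ref = np.array([0.2, 0.4, 0.6, 0.8])
same_shape = 3.0 * ref                   # brighter copy: angle ~ 0
other = np.array([0.8, 0.6, 0.4, 0.2])  # reversed shape: large angle
```

Calling `sam(ref, same_shape)` returns an angle near zero, while `sam(ref, other)` returns a much larger angle, so the reference spectrum with the smallest angle is the best match.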

Hyperspectral image processing applications include classification, target detection, anomaly detection, and material analysis.

Segment and classify each pixel in a hyperspectral image through unmixing and spectral matching. For examples of classification, see Hyperspectral Image Analysis Using Maximum Abundance Classification and Classify Hyperspectral Image Using Library Signatures and SAM.

You can perform target detection by matching the known spectral signature of a target material to the pixel spectra in hyperspectral data. For an example, see Target Detection Using Spectral Signature Matching.

You can also use hyperspectral image processing for anomaly detection and material analysis, such as vegetation analysis.

Use the `anomalyRX` function to detect anomalies in a hyperspectral image. Use the `spectralIndices` function to analyze the spectral characteristics of various materials present in hyperspectral data.
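The RX detector scores each pixel by its squared Mahalanobis distance from the background statistics, so spectra far from the background distribution stand out. Here is a minimal global-background sketch in NumPy on synthetic data, not the `anomalyRX` implementation:

```python
import numpy as np

rng = np.random.default_rng(5)
M, N, C = 20, 20, 6
cube = rng.normal(size=(M, N, C))
cube[10, 10] += 8.0                 # plant one anomalous pixel spectrum

# Global RX score: squared Mahalanobis distance of each pixel spectrum
# from the scene mean, using the scene covariance as the background model.
X = cube.reshape(-1, C)
mu = X.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
d = X - mu
scores = np.einsum('ij,jk,ik->i', d, cov_inv, d).reshape(M, N)
```

Thresholding the score map (or taking its largest values) flags the anomalous pixels; in this synthetic scene the planted pixel receives the highest score.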

**See Also**

`anomalyRX` | `estimateAbundanceLS` | `hypercube` | `ndvi` | `ppi` | `spectralMatch`