Radar Waveform Classification Using Deep Learning

This example shows how to classify radar waveform types in synthesized data using the Wigner-Ville distribution (WVD) and a deep convolutional neural network (CNN).

Modulation classification is an important function for an intelligent receiver, with numerous applications such as cognitive radar and software-defined radio. Typically, to identify these waveforms and classify them by modulation type, it is necessary to define meaningful features and input them to a classifier. While effective, this procedure can require extensive effort and domain knowledge to yield an accurate classification. This example explores a framework that automatically extracts time-frequency features from signals and performs signal classification using a deep learning network.

The first part of this example simulates a radar classification system that synthesizes three pulsed radar waveforms and classifies them. The radar waveforms are:

  • Rectangular

  • Linear frequency modulation (LFM)

  • Barker Code

A radar classification system does not exist in isolation. Rather, it resides in an increasingly occupied frequency spectrum, competing with other transmitted sources such as communications systems, radio, and navigation systems. The second part of this example extends the network to include additional communication modulation types. In addition to the first set of radar waveforms, the extended network synthesizes and identifies these communication waveforms:

  • Gaussian frequency shift keying (GFSK)

  • Continuous phase frequency shift keying (CPFSK)

  • Broadcast frequency modulation (B-FM)

  • Double sideband amplitude modulation (DSB-AM)

  • Single sideband amplitude modulation (SSB-AM)

This example primarily focuses on radar waveforms, with the classification being extended to include a small set of amplitude and frequency modulation communications signals. See Modulation Classification with Deep Learning (Communications Toolbox) for a full workflow of modulation classification with a wide array of communication signals.

Generate Radar Waveforms

Generate 3000 signals with a sample rate of 100 MHz for each modulation type. Use phased.RectangularWaveform for rectangular pulses, phased.LinearFMWaveform for LFM, and phased.PhaseCodedWaveform for phase coded pulses with Barker code.

Each signal has unique parameters and is augmented with various impairments to make it more realistic. For each waveform, the pulse width and repetition frequency are randomly generated. For LFM waveforms, the sweep bandwidth and direction are randomly generated. For Barker waveforms, the chip width and number of chips are randomly generated. All signals are impaired with white Gaussian noise using the awgn function, with a random signal-to-noise ratio in the range [-6, 30] dB. A frequency offset with a random carrier frequency in the range [Fs/6, Fs/5] is applied to each signal using the comm.PhaseFrequencyOffset object. Lastly, each signal is passed through a multipath Rician fading channel, comm.RicianChannel.
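As an illustration of these impairment steps, here is a minimal Python/NumPy sketch of one augmented LFM pulse. The SNR and carrier-offset ranges match the text above; the pulse length and sweep-bandwidth range are assumptions for illustration, not the actual values used by the helper function.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 100e6                      # sample rate from the example (100 MHz)

def lfm_pulse_with_impairments(n_samples=1024):
    """Sketch of one augmented LFM pulse: random sweep, AWGN, carrier offset.
    n_samples and the sweep-bandwidth range are illustrative assumptions."""
    t = np.arange(n_samples) / fs
    bw = rng.uniform(fs / 20, fs / 4)       # random sweep bandwidth (assumed range)
    direction = rng.choice([-1.0, 1.0])     # random sweep direction
    k = direction * bw / t[-1]              # chirp rate
    x = np.exp(1j * np.pi * k * t**2)       # complex LFM chirp

    # White Gaussian noise at a random SNR in [-6, 30] dB (as in the example)
    snr_db = rng.uniform(-6, 30)
    sig_power = np.mean(np.abs(x)**2)
    noise_power = sig_power / 10**(snr_db / 10)
    noise = np.sqrt(noise_power / 2) * (rng.standard_normal(n_samples)
                                        + 1j * rng.standard_normal(n_samples))

    # Random carrier frequency offset in [fs/6, fs/5] (as in the example)
    fc = rng.uniform(fs / 6, fs / 5)
    return (x + noise) * np.exp(2j * np.pi * fc * t)

wav_example = lfm_pulse_with_impairments()
```

The Rician fading step is omitted here for brevity; comm.RicianChannel additionally convolves each signal with a random multipath channel response.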

The provided helper function helperGenerateRadarWaveforms creates and augments each modulation type.

[wav, modType] = helperGenerateRadarWaveforms();

Plot the Fourier transform for a few of the LFM waveforms to show the variation in the generated set.

idLFM = find(modType == "LFM",3);
nfft = 2^nextpow2(length(wav{1}));
f = (0:(nfft/2-1))/nfft*100e6;

figure
subplot(1,3,1)
Z = fft(wav{idLFM(1)},nfft);
plot(f/1e6,abs(Z(1:nfft/2)))
xlabel('Frequency (MHz)');ylabel('Amplitude');axis square
subplot(1,3,2)
Z = fft(wav{idLFM(2)},nfft);
plot(f/1e6,abs(Z(1:nfft/2)))
xlabel('Frequency (MHz)');ylabel('Amplitude');axis square
subplot(1,3,3)
Z = fft(wav{idLFM(3)},nfft);
plot(f/1e6,abs(Z(1:nfft/2)))
xlabel('Frequency (MHz)');ylabel('Amplitude');axis square

Feature Extraction Using Wigner-Ville Distribution

To improve the classification performance of machine learning algorithms, a common approach is to input extracted features in place of the original signal data. The features provide a representation of the input data that makes it easier for a classification algorithm to discriminate across the classes. The Wigner-Ville distribution represents a time-frequency view of the original data that is useful for time varying signals. The high resolution and locality in both time and frequency provide good features for the identification of similar modulation types. Use the wvd function to compute the smoothed pseudo WVD for each of the modulation types.

figure
subplot(1,3,1)
wvd(wav{find(modType == "Rect",1)},100e6,'smoothedPseudo')
axis square; colorbar off; title('Rect')
subplot(1,3,2)
wvd(wav{find(modType == "LFM",1)},100e6,'smoothedPseudo')
axis square; colorbar off; title('LFM')
subplot(1,3,3)
wvd(wav{find(modType == "Barker",1)},100e6,'smoothedPseudo')
axis square; colorbar off; title('Barker')
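For intuition about what the wvd function computes, a minimal discrete Wigner-Ville distribution can be sketched in Python/NumPy as follows. This is the raw WVD; the 'smoothedPseudo' option used above additionally applies independent time and frequency smoothing windows, which are omitted here.

```python
import numpy as np

def wigner_ville(x):
    """Minimal discrete Wigner-Ville distribution sketch (no smoothing).
    For each time instant t, form the instantaneous autocorrelation
    x[t+m] * conj(x[t-m]) over all usable lags m, then FFT over the lag."""
    x = np.asarray(x, dtype=complex)
    n = len(x)
    W = np.zeros((n, n))
    for t in range(n):
        mmax = min(t, n - 1 - t)            # maximum usable lag at this time
        kernel = np.zeros(n, dtype=complex)
        m = np.arange(-mmax, mmax + 1)
        kernel[m % n] = x[t + m] * np.conj(x[t - m])
        W[:, t] = np.real(np.fft.fft(kernel))
    return W                                # rows: frequency bins, columns: time

# Example: the WVD of a pure tone concentrates along a single frequency bin
tone = np.exp(2j * np.pi * 8 * np.arange(64) / 64)
W = wigner_ville(tone)
```

Note the lag product doubles the effective frequency, so a tone at bin 8 of a 64-point signal peaks at WVD frequency bin 16; this frequency-axis convention is why WVD implementations map the output to the range [0, fs/2].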

The helper function helperGenerateTFDfiles computes the smoothed pseudo WVD for each input signal, downsamples the result to size 227-by-227, and saves it as an image. A folder named TFDDatabase is created in the current directory to store all of the distributions for each modulation type. Each modulation type is placed in a subfolder named according to its type. This process can take several minutes because of the large database size and the complexity of the wvd algorithm.

helperGenerateTFDfiles(wav,modType,100e6)
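The downsampling step can be approximated with a simple nearest-neighbor resize, sketched here in Python/NumPy. This is illustrative only; the interpolation method actually used inside helperGenerateTFDfiles is not shown in this example.

```python
import numpy as np

def resize_nearest(img, out_shape=(227, 227)):
    """Nearest-neighbor resize: pick evenly spaced source rows and columns.
    Stands in for the 227-by-227 rescaling done before saving each image."""
    rows = np.linspace(0, img.shape[0] - 1, out_shape[0]).round().astype(int)
    cols = np.linspace(0, img.shape[1] - 1, out_shape[1]).round().astype(int)
    return img[np.ix_(rows, cols)]

# Example: shrink a synthetic time-frequency map to the network's input size
tfd = np.abs(np.random.default_rng(0).standard_normal((500, 600)))
img227 = resize_nearest(tfd)
```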

Create an image datastore object for the created folder to manage the image files used for training the deep learning network. This step avoids having to load all images into memory. Specify the label source to be folder names. This assigns each signal's modulation type according to the folder name.

folders = fullfile('TFDDatabase',{'Rect','LFM','Barker'});
imds = imageDatastore(folders,...
    'FileExtensions','.png','LabelSource','foldernames','ReadFcn',@readTFDForAlexNet);

The network is trained with 80% of the data and tested with 10%. The remaining 10% is used for validation. Use the splitEachLabel function to divide the imageDatastore into training, validation, and testing sets.

[imdsTrain,imdsTest,imdsValidation] = splitEachLabel(imds,0.8,0.1);
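The per-class split performed by splitEachLabel can be sketched in Python as follows. This is an illustrative stand-in, not the MATLAB implementation: each class's indices are shuffled and partitioned 80/10/10, so every modulation type is represented proportionally in each set.

```python
import random
from collections import defaultdict

def split_each_label(labels, p_train=0.8, p_test=0.1, seed=0):
    """Per-class split analogous to splitEachLabel(imds, 0.8, 0.1):
    returns index lists for training, testing, and validation."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for i, lab in enumerate(labels):
        by_class[lab].append(i)
    train, test, val = [], [], []
    for idx in by_class.values():
        rng.shuffle(idx)
        n_tr = round(p_train * len(idx))
        n_te = round(p_test * len(idx))
        train += idx[:n_tr]
        test += idx[n_tr:n_tr + n_te]
        val += idx[n_tr + n_te:]          # remainder goes to validation
    return train, test, val

# Example: 3000 signals per modulation type, as generated above
labels = ['Rect'] * 3000 + ['LFM'] * 3000 + ['Barker'] * 3000
train_idx, test_idx, val_idx = split_each_label(labels)
```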

Set Up Deep Learning Network

Before the deep learning network can be trained, define the network architecture. This example utilizes transfer learning for AlexNet, a deep CNN created for image classification. Transfer learning is the process of retraining an existing neural network to classify new targets. This network accepts image input of size 227-by-227-by-3. Prior to being input to the network, the custom read function readTFDForAlexNet transforms the two-dimensional time-frequency distribution to an RGB image of the correct size. AlexNet performs classification of 1000 categories in its default configuration. To tune AlexNet for this application, modify the final three layers so that they classify only the three radar modulation types.

Specify a fullyConnectedLayer with an output size of three, one output for each of the modulation types. The final softmaxLayer and classificationLayer produce the final output classifications.

numClasses = 3;
net = alexnet;
layersTransfer = net.Layers(1:end-3);
layers = [
    layersTransfer
    fullyConnectedLayer(numClasses,'WeightLearnRateFactor',20,'BiasLearnRateFactor',20)
    softmaxLayer
    classificationLayer]
layers = 
  25x1 Layer array with layers:

     1   'data'    Image Input                   227x227x3 images with 'zerocenter' normalization
     2   'conv1'   Convolution                   96 11x11x3 convolutions with stride [4  4] and padding [0  0  0  0]
     3   'relu1'   ReLU                          ReLU
     4   'norm1'   Cross Channel Normalization   cross channel normalization with 5 channels per element
     5   'pool1'   Max Pooling                   3x3 max pooling with stride [2  2] and padding [0  0  0  0]
     6   'conv2'   Grouped Convolution           2 groups of 128 5x5x48 convolutions with stride [1  1] and padding [2  2  2  2]
     7   'relu2'   ReLU                          ReLU
     8   'norm2'   Cross Channel Normalization   cross channel normalization with 5 channels per element
     9   'pool2'   Max Pooling                   3x3 max pooling with stride [2  2] and padding [0  0  0  0]
    10   'conv3'   Convolution                   384 3x3x256 convolutions with stride [1  1] and padding [1  1  1  1]
    11   'relu3'   ReLU                          ReLU
    12   'conv4'   Grouped Convolution           2 groups of 192 3x3x192 convolutions with stride [1  1] and padding [1  1  1  1]
    13   'relu4'   ReLU                          ReLU
    14   'conv5'   Grouped Convolution           2 groups of 128 3x3x192 convolutions with stride [1  1] and padding [1  1  1  1]
    15   'relu5'   ReLU                          ReLU
    16   'pool5'   Max Pooling                   3x3 max pooling with stride [2  2] and padding [0  0  0  0]
    17   'fc6'     Fully Connected               4096 fully connected layer
    18   'relu6'   ReLU                          ReLU
    19   'drop6'   Dropout                       50% dropout
    20   'fc7'     Fully Connected               4096 fully connected layer
    21   'relu7'   ReLU                          ReLU
    22   'drop7'   Dropout                       50% dropout
    23   ''        Fully Connected               3 fully connected layer
    24   ''        Softmax                       softmax
    25   ''        Classification Output         crossentropyex
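For intuition, the forward computation of the replacement head (a fully connected layer to three classes followed by softmax) can be sketched in Python/NumPy. The weights here are random placeholders; in the example they are learned during training, at 20 times the base learning rate.

```python
import numpy as np

rng = np.random.default_rng(1)

def new_head_forward(features, W, b):
    """Replacement head: fullyConnectedLayer(3) then softmaxLayer.
    The classification layer then reports the class with the largest
    probability. W and b are placeholders, not trained weights."""
    z = W @ features + b                 # fully connected: 4096 -> 3
    z = z - z.max()                      # subtract max for numerical stability
    p = np.exp(z) / np.exp(z).sum()      # softmax over the three classes
    return p

features = rng.standard_normal(4096)     # stand-in for AlexNet's 'drop7' output
W = 0.01 * rng.standard_normal((3, 4096))
b = np.zeros(3)
probs = new_head_forward(features, W, b)
```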

Choose options for the training process that ensure good network performance. Refer to the trainingOptions documentation for a description of each option.

options = trainingOptions('sgdm', ...
    'MiniBatchSize',128, ...
    'MaxEpochs',5, ...
    'InitialLearnRate',1e-3, ...
    'Shuffle','every-epoch', ...
    'Verbose',false, ...
    'Plots','training-progress',...
    'ValidationData',imdsValidation);

Train the Network

Use the trainNetwork command to train the created CNN. Because of the dataset's large size, the process may take several minutes. If your machine has a GPU and Parallel Computing Toolbox™, then MATLAB automatically uses the GPU for training. Otherwise, it uses the CPU. The training accuracy plots in the figure show the progress of the network's learning across all iterations. On the three radar modulation types, the network classifies almost 100% of the training signals correctly.

trainedNet = trainNetwork(shuffle(imdsTrain),layers,options);

Evaluate Performance on Radar Waveforms

Use the trained network to classify the testing data using the classify command. A confusion matrix is one method to visualize classification performance. Use the confusionchart command to calculate and visualize the classification accuracy. For the three modulation types input to the network, almost all of the phase-coded, LFM, and rectangular waveforms were correctly identified by the network.

predicted = classify(trainedNet,imdsTest);
figure
confusionchart(imdsTest.Labels,predicted,'Normalization','column-normalized');
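The column-normalized statistic that confusionchart displays can be sketched in Python/NumPy as follows. In this illustrative version, rows are true classes, columns are predicted classes, and each nonzero column is scaled to sum to 1.

```python
import numpy as np

def confusion_matrix_colnorm(true_labels, predicted, classes):
    """Column-normalized confusion matrix sketch, mirroring
    confusionchart(...,'Normalization','column-normalized')."""
    index = {c: i for i, c in enumerate(classes)}
    cm = np.zeros((len(classes), len(classes)))
    for t, p in zip(true_labels, predicted):
        cm[index[t], index[p]] += 1      # row: true class, column: predicted
    col_sums = cm.sum(axis=0, keepdims=True)
    return cm / np.where(col_sums == 0, 1, col_sums)

# Example with hypothetical labels: one Rect signal misclassified as LFM
cm = confusion_matrix_colnorm(
    ['Rect', 'Rect', 'LFM', 'Barker'],
    ['Rect', 'LFM', 'LFM', 'Barker'],
    ['Rect', 'LFM', 'Barker'])
```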

Generate Communications Waveforms and Extract Features

The frequency spectrum of a radar classification system must compete with other transmitted sources. Let's see how the created network extends to incorporate other simulated modulation types. Another MathWorks example, Modulation Classification with Deep Learning (Communications Toolbox), performs modulation classification of several different modulation types using Communications Toolbox™. The helper function helperGenerateCommsWaveforms generates and augments a subset of the modulation types used in that example. Since the WVD loses phase information, a subset of only the amplitude and frequency modulation types are used.

See the example link for an in-depth description of the workflow necessary for digital and analog modulation classification and the techniques used to create these waveforms. For each modulation type, use wvd to extract time-frequency features and visualize.

[wav, modType] = helperGenerateCommsWaveforms();

figure
subplot(2,3,1)
wvd(wav{find(modType == "GFSK",1)},200e3,'smoothedPseudo')
axis square; colorbar off; title('GFSK')
subplot(2,3,2)
wvd(wav{find(modType == "CPFSK",1)},200e3,'smoothedPseudo')
axis square; colorbar off; title('CPFSK')
subplot(2,3,3)
wvd(wav{find(modType == "B-FM",1)},200e3,'smoothedPseudo')
axis square; colorbar off; title('B-FM')
subplot(2,3,4)
wvd(wav{find(modType == "SSB-AM",1)},200e3,'smoothedPseudo')
axis square; colorbar off; title('SSB-AM')
subplot(2,3,5)
wvd(wav{find(modType == "DSB-AM",1)},200e3,'smoothedPseudo')
axis square; colorbar off; title('DSB-AM')

Use the helper function helperGenerateTFDfiles again to compute the smoothed pseudo WVD for each input signal. Create an image datastore object to manage the image files of all modulation types.

helperGenerateTFDfiles(wav,modType,200e3)
folders = fullfile('TFDDatabase',{'Rect','LFM','Barker','GFSK','CPFSK','B-FM','SSB-AM','DSB-AM'});
imds = imageDatastore(folders,...
    'FileExtensions','.png','LabelSource','foldernames','ReadFcn',@readTFDForAlexNet);

Again, divide the data into a training set, a validation set, and a testing set using the splitEachLabel function.

[imdsTrain,imdsTest,imdsValidation] = splitEachLabel(imds,0.8,0.1);

Adjust Deep Learning Network Architecture

Previously, the network architecture was set up to classify three modulation types. This must be updated to allow classification of all eight modulation types of both radar and communication signals. This is a similar process as before, with the exception that the fullyConnectedLayer now requires an output size of eight.

numClasses = 8;
net = alexnet;
layersTransfer = net.Layers(1:end-3);
layers = [
    layersTransfer
    fullyConnectedLayer(numClasses,'WeightLearnRateFactor',20,'BiasLearnRateFactor',20)
    softmaxLayer
    classificationLayer];

options = trainingOptions('sgdm', ...
    'MiniBatchSize',256, ...
    'MaxEpochs',5, ...
    'InitialLearnRate',1e-3, ...
    'Shuffle','every-epoch', ...
    'Verbose',false, ...
    'Plots','training-progress',...
    'ValidationData',imdsValidation);

Use the trainNetwork command to train the created CNN. For all modulation types, the training converges with an accuracy of about 95% correct classification.

trainedNet = trainNetwork(shuffle(imdsTrain),layers,options);

Evaluate Performance on All Signals

Use the classify command to classify the signals held aside for testing. Again, visualize the performance using confusionchart.

predicted = classify(trainedNet,imdsTest);
figure;
confusionchart(imdsTest.Labels,predicted,'Normalization','column-normalized');

For the eight modulation types input to the network, over 99% of B-FM, CPFSK, GFSK, Barker, Rectangular, and LFM modulation types were correctly classified. On average, over 85% of AM signals were correctly identified. From the confusion matrix, a high percentage of DSB-AM signals were misclassified as SSB-AM, and SSB-AM signals as DSB-AM.

Let us investigate a few of these misclassifications to gain insight into the network's learning process. Use the readimage function on the image datastore to extract from the test dataset a single image for each true-class/predicted-class pairing. The displayed WVDs look very similar. Since DSB-AM and SSB-AM signals have very similar signatures, this explains in part the network's difficulty in correctly classifying these two types. Further signal processing could make the differences between these two modulation types clearer to the network and result in improved classification.

DSB_DSB = readimage(imdsTest,find((imdsTest.Labels == 'DSB-AM') & (predicted == 'DSB-AM'),1));
DSB_SSB = readimage(imdsTest,find((imdsTest.Labels == 'DSB-AM') & (predicted == 'SSB-AM'),1));
SSB_DSB = readimage(imdsTest,find((imdsTest.Labels == 'SSB-AM') & (predicted == 'DSB-AM'),1));
SSB_SSB = readimage(imdsTest,find((imdsTest.Labels == 'SSB-AM') & (predicted == 'SSB-AM'),1));

figure
subplot(2,2,1)
imagesc(DSB_DSB(:,:,1))
axis square; title({'Actual Class: DSB-AM','Predicted Class: DSB-AM'})
subplot(2,2,2)
imagesc(DSB_SSB(:,:,1))
axis square; title({'Actual Class: DSB-AM','Predicted Class: SSB-AM'})
subplot(2,2,3)
imagesc(SSB_DSB(:,:,1))
axis square; title({'Actual Class: SSB-AM','Predicted Class: DSB-AM'})
subplot(2,2,4)
imagesc(SSB_SSB(:,:,1))
axis square; title({'Actual Class: SSB-AM','Predicted Class: SSB-AM'})

This example showed how radar and communications modulation types can be classified by using time-frequency signal processing techniques and a deep learning network. Further improvement could be investigated by using the time-frequency analysis available in Wavelet Toolbox™ and the additional Fourier analysis available in Signal Processing Toolbox™.

References

[1] Brynolfsson, Johan, and Maria Sandsten. "Classification of one-dimensional non-stationary signals using the Wigner-Ville distribution in convolutional neural networks." 25th European Signal Processing Conference (EUSIPCO). IEEE, 2017.

[2] Liu, Xiaoyu, Diyu Yang, and Aly El Gamal. "Deep neural network architectures for modulation classification." 51st Asilomar Conference on Signals, Systems and Computers. 2017.

[3] Wang, Chao, Jian Wang, and Xudong Zhang. "Automatic radar waveform recognition based on time-frequency analysis and convolutional neural network." IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). 2017.