
bilstmLayer

Bidirectional long short-term memory (BiLSTM) layer for recurrent neural network (RNN)

Description

A bidirectional LSTM (BiLSTM) layer is an RNN layer that learns bidirectional long-term dependencies between time steps of time-series or sequence data. These dependencies can be useful when you want the RNN to learn from the complete time series at each time step.

Creation

Description

layer = bilstmLayer(numHiddenUnits) creates a bidirectional LSTM layer and sets the NumHiddenUnits property.

layer = bilstmLayer(numHiddenUnits,Name,Value) sets additional OutputMode, Activations, State, Parameters and Initialization, Learning Rate and Regularization, and Name properties using one or more name-value pair arguments. You can specify multiple name-value pair arguments. Enclose each property name in quotes.


Properties


BiLSTM

NumHiddenUnits

Number of hidden units (also known as the hidden size), specified as a positive integer.

The number of hidden units corresponds to the amount of information that the layer remembers between time steps (the hidden state). The hidden state can contain information from all the previous time steps, regardless of the sequence length. If the number of hidden units is too large, then the layer can overfit to the training data. The hidden state does not limit the number of time steps that the layer processes in an iteration.

The layer outputs data with 2*NumHiddenUnits channels, the concatenation of the forward and backward hidden states.

To set this property, use the numHiddenUnits argument when you create the BiLSTMLayer object. After you create a BiLSTMLayer object, this property is read-only.

Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64

OutputMode

Output mode, specified as one of these values:

  • "sequence" — Output the complete sequence.

  • "last" — Output the last time step of the sequence.

The BiLSTMLayer object stores this property as a character vector.

To set this property, use the corresponding name-value argument when you create the BiLSTMLayer object. After you create a BiLSTMLayer object, this property is read-only.
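For example, this sketch (the choice of 100 hidden units is arbitrary) creates one layer per output mode. The "last" mode is typical when you need a single vector per sequence, such as before a fully connected layer in sequence classification.

seqLayer = bilstmLayer(100,OutputMode="sequence");  % output: every time step
lastLayer = bilstmLayer(100,OutputMode="last");     % output: final time step only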

HasStateInputs

This property is read-only.

Flag for state inputs to the layer, specified as 0 (false) or 1 (true).

If the HasStateInputs property is 0 (false), then the layer has one input with the name "in", which corresponds to the input data. In this case, the layer uses the HiddenState and CellState properties for the layer operation.

If the HasStateInputs property is 1 (true), then the layer has three inputs with the names "in", "hidden", and "cell", which correspond to the input data, hidden state, and cell state, respectively. In this case, the layer uses the values passed to these inputs for the layer operation. If HasStateInputs is 1 (true), then the HiddenState and CellState properties must be empty.

HasStateOutputs

This property is read-only.

Flag for state outputs from the layer, specified as 0 (false) or 1 (true).

If the HasStateOutputs property is 0 (false), then the layer has one output with the name "out", which corresponds to the output data.

If the HasStateOutputs property is 1 (true), then the layer has three outputs with the names "out", "hidden", and "cell", which correspond to the output data, hidden state, and cell state, respectively. In this case, the layer also outputs the state values that it computes.
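For example, this sketch (the hidden-unit count is arbitrary) creates a layer that exposes its states as extra inputs and outputs, and then inspects the resulting port names:

layer = bilstmLayer(100,HasStateInputs=true,HasStateOutputs=true);
layer.InputNames   % {'in'}  {'hidden'}  {'cell'}
layer.OutputNames  % {'out'}  {'hidden'}  {'cell'}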

InputSize

This property is read-only.

Input size, specified as a positive integer or "auto". If InputSize is "auto", then the software automatically assigns the input size at training time.

If InputSize is "auto", then the BiLSTMLayer object stores this property as a character vector.

Data Types: double | char | string

Activations

StateActivationFunction

Activation function to update the cell and hidden state, specified as one of these values:

  • "tanh" — Use the hyperbolic tangent function (tanh).

  • "softsign" — Use the softsign function softsign(x)=x1+|x|.

  • "relu" (since R2024b) — Use the rectified linear unit (ReLU) function ReLU(x)={x,x>00,x0.

The software uses this option as the function σc in the calculations to update the cell and hidden state.

The BiLSTMLayer object stores this property as a character vector.

GateActivationFunction

Activation function to apply to the gates, specified as one of these values:

  • "sigmoid" — Use the sigmoid function, σ(x)=(1+ex)1.

  • "hard-sigmoid" — Use the hard sigmoid function,

    σ(x)={00.2x+0.51if x<2.5if2.5x2.5if x>2.5.

The software uses this option as the function σg in the calculations for the layer gates.

The BiLSTMLayer object stores this property as a character vector.

To set this property, use the corresponding name-value argument when you create the BiLSTMLayer object. After you create a BiLSTMLayer object, this property is read-only.
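For example, this sketch (the hidden-unit count is arbitrary) sets both activation functions when creating the layer:

layer = bilstmLayer(100, ...
    StateActivationFunction="softsign", ...
    GateActivationFunction="hard-sigmoid");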

State

CellState

Cell state to use in the layer operation, specified as a 2*NumHiddenUnits-by-1 numeric vector. This value corresponds to the initial cell state when data is passed to the layer.

After setting this property manually, calls to the resetState function set the cell state to this value.

If HasStateInputs is true, then the CellState property must be empty.

Data Types: single | double

HiddenState

Hidden state to use in the layer operation, specified as a 2*NumHiddenUnits-by-1 numeric vector. This value corresponds to the initial hidden state when data is passed to the layer.

After setting this property manually, calls to the resetState function set the hidden state to this value.

If HasStateInputs is true, then the HiddenState property must be empty.

Data Types: single | double
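For example, this sketch sets explicit zero-valued initial states. Note the 2*NumHiddenUnits rows, which stack the forward and backward components:

numHiddenUnits = 100;
layer = bilstmLayer(numHiddenUnits);
layer.HiddenState = zeros(2*numHiddenUnits,1);  % initial hidden state (forward and backward stacked)
layer.CellState = zeros(2*numHiddenUnits,1);    % initial cell state, same size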

Parameters and Initialization

InputWeightsInitializer

Function to initialize the input weights, specified as one of the following:

  • "glorot" — Initialize the input weights with the Glorot initializer [1] (also known as the Xavier initializer). The Glorot initializer independently samples from a uniform distribution with zero mean and variance 2/(InputSize + numOut), where numOut = 8*NumHiddenUnits.

  • "he" — Initialize the input weights with the He initializer [2]. The He initializer samples from a normal distribution with zero mean and variance 2/InputSize.

  • "orthogonal" — Initialize the input weights with Q, the orthogonal matrix given by the QR decomposition of Z = QR for a random matrix Z sampled from a unit normal distribution [3].

  • "narrow-normal" — Initialize the input weights by independently sampling from a normal distribution with zero mean and standard deviation 0.01.

  • "zeros" — Initialize the input weights with zeros.

  • "ones" — Initialize the input weights with ones.

  • Function handle — Initialize the input weights with a custom function. If you specify a function handle, then the function must be of the form weights = func(sz), where sz is the size of the input weights.

The layer only initializes the input weights when the InputWeights property is empty.

Data Types: char | string | function_handle
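For example, this sketch passes a custom function handle of the required form weights = func(sz) for the input weights, and also selects a built-in initializer for the recurrent weights (described next):

layer = bilstmLayer(100, ...
    InputWeightsInitializer=@(sz) 0.01*randn(sz), ...  % custom narrow-normal initializer
    RecurrentWeightsInitializer="orthogonal");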

RecurrentWeightsInitializer

Function to initialize the recurrent weights, specified as one of the following:

  • "orthogonal" — Initialize the recurrent weights with Q, the orthogonal matrix given by the QR decomposition of Z = QR for a random matrix Z sampled from a unit normal distribution [3].

  • "glorot" — Initialize the recurrent weights with the Glorot initializer [1] (also known as the Xavier initializer). The Glorot initializer independently samples from a uniform distribution with zero mean and variance 2/(numIn + numOut), where numIn = NumHiddenUnits and numOut = 8*NumHiddenUnits.

  • "he" — Initialize the recurrent weights with the He initializer [2]. The He initializer samples from a normal distribution with zero mean and variance 2/NumHiddenUnits.

  • "narrow-normal" — Initialize the recurrent weights by independently sampling from a normal distribution with zero mean and standard deviation 0.01.

  • "zeros" — Initialize the recurrent weights with zeros.

  • "ones" — Initialize the recurrent weights with ones.

  • Function handle — Initialize the recurrent weights with a custom function. If you specify a function handle, then the function must be of the form weights = func(sz), where sz is the size of the recurrent weights.

The layer only initializes the recurrent weights when the RecurrentWeights property is empty.

Data Types: char | string | function_handle

BiasInitializer

Function to initialize the bias, specified as one of these values:

  • "unit-forget-gate" — Initialize the forget gate bias with ones and the remaining biases with zeros.

  • "narrow-normal" — Initialize the bias by independently sampling from a normal distribution with zero mean and a standard deviation of 0.01.

  • "ones" — Initialize the bias with ones.

  • Function handle — Initialize the bias with a custom function. If you specify a function handle, then the function must be of the form bias = func(sz), where sz is the size of the bias.

The layer only initializes the bias when the Bias property is empty.

The BiLSTMLayer object stores this property as a character vector or a function handle.

Data Types: char | string | function_handle

InputWeights

Input weights, specified as a matrix.

The input weight matrix is a concatenation of the eight input weight matrices for the components (gates) in the bidirectional LSTM layer. The eight matrices are concatenated vertically in the following order:

  1. Input gate (Forward)

  2. Forget gate (Forward)

  3. Cell candidate (Forward)

  4. Output gate (Forward)

  5. Input gate (Backward)

  6. Forget gate (Backward)

  7. Cell candidate (Backward)

  8. Output gate (Backward)

The input weights are learnable parameters. When you train a neural network using the trainnet function, if InputWeights is nonempty, then the software uses the InputWeights property as the initial value. If InputWeights is empty, then the software uses the initializer specified by InputWeightsInitializer.

At training time, InputWeights is an 8*NumHiddenUnits-by-InputSize matrix.

Data Types: single | double

RecurrentWeights

Recurrent weights, specified as a matrix.

The recurrent weight matrix is a concatenation of the eight recurrent weight matrices for the components (gates) in the bidirectional LSTM layer. The eight matrices are concatenated vertically in the following order:

  1. Input gate (Forward)

  2. Forget gate (Forward)

  3. Cell candidate (Forward)

  4. Output gate (Forward)

  5. Input gate (Backward)

  6. Forget gate (Backward)

  7. Cell candidate (Backward)

  8. Output gate (Backward)

The recurrent weights are learnable parameters. When you train an RNN using the trainnet function, if RecurrentWeights is nonempty, then the software uses the RecurrentWeights property as the initial value. If RecurrentWeights is empty, then the software uses the initializer specified by RecurrentWeightsInitializer.

At training time, RecurrentWeights is an 8*NumHiddenUnits-by-NumHiddenUnits matrix.

Data Types: single | double

Bias

Layer biases, specified as a numeric vector.

The bias vector is a concatenation of the eight bias vectors for the components (gates) in the bidirectional LSTM layer. The eight vectors are concatenated vertically in the following order:

  1. Input gate (Forward)

  2. Forget gate (Forward)

  3. Cell candidate (Forward)

  4. Output gate (Forward)

  5. Input gate (Backward)

  6. Forget gate (Backward)

  7. Cell candidate (Backward)

  8. Output gate (Backward)

The layer biases are learnable parameters. When you train a neural network, if Bias is nonempty, then the trainnet function uses the Bias property as the initial value. If Bias is empty, then the software uses the initializer specified by BiasInitializer.

At training time, Bias is an 8*NumHiddenUnits-by-1 numeric vector.

Data Types: single | double
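To see these sizes in practice, this sketch (input size 12 and 100 hidden units are arbitrary choices) initializes a network containing the layer and queries the learnable parameters:

layers = [sequenceInputLayer(12) bilstmLayer(100)];
net = dlnetwork(layers);
size(net.Layers(2).InputWeights)      % [800 12], that is, 8*NumHiddenUnits-by-InputSize
size(net.Layers(2).RecurrentWeights)  % [800 100], that is, 8*NumHiddenUnits-by-NumHiddenUnits
size(net.Layers(2).Bias)              % [800 1], that is, 8*NumHiddenUnits-by-1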

Learning Rate and Regularization

InputWeightsLearnRateFactor

Learning rate factor for the input weights, specified as a nonnegative scalar or a 1-by-8 numeric vector.

The software multiplies this factor by the global learning rate to determine the learning rate for the input weights of the layer. For example, if InputWeightsLearnRateFactor is 2, then the learning rate for the input weights of the layer is twice the current global learning rate. The software determines the global learning rate based on the settings you specify with the trainingOptions function.

To control the value of the learning rate factor for the eight individual matrices in InputWeights, assign a 1-by-8 vector, where the entries correspond to the learning rate factor of the following (see the sketch after this property description):

  1. Input gate (Forward)

  2. Forget gate (Forward)

  3. Cell candidate (Forward)

  4. Output gate (Forward)

  5. Input gate (Backward)

  6. Forget gate (Backward)

  7. Cell candidate (Backward)

  8. Output gate (Backward)

To specify the same value for all the matrices, specify a nonnegative scalar.

Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
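As a sketch of the per-gate form, the following doubles the learning rate for the input-gate weights in both directions and leaves the remaining gates at the global rate:

layer = bilstmLayer(100);
layer.InputWeightsLearnRateFactor = [2 1 1 1 2 1 1 1];  % forward gates 1-4, then backward gates 5-8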

RecurrentWeightsLearnRateFactor

Learning rate factor for the recurrent weights, specified as a nonnegative scalar or a 1-by-8 numeric vector.

The software multiplies this factor by the global learning rate to determine the learning rate for the recurrent weights of the layer. For example, if RecurrentWeightsLearnRateFactor is 2, then the learning rate for the recurrent weights of the layer is twice the current global learning rate. The software determines the global learning rate based on the settings you specify using the trainingOptions function.

To control the value of the learning rate factor for the eight individual matrices in RecurrentWeights, assign a 1-by-8 vector, where the entries correspond to the learning rate factor of the following:

  1. Input gate (Forward)

  2. Forget gate (Forward)

  3. Cell candidate (Forward)

  4. Output gate (Forward)

  5. Input gate (Backward)

  6. Forget gate (Backward)

  7. Cell candidate (Backward)

  8. Output gate (Backward)

To specify the same value for all the matrices, specify a nonnegative scalar.

Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64

BiasLearnRateFactor

Learning rate factor for the biases, specified as a nonnegative scalar or a 1-by-8 numeric vector.

The software multiplies this factor by the global learning rate to determine the learning rate for the biases in this layer. For example, if BiasLearnRateFactor is 2, then the learning rate for the biases in the layer is twice the current global learning rate. The software determines the global learning rate based on the settings you specify using the trainingOptions function.

To control the value of the learning rate factor for the eight individual vectors in Bias, assign a 1-by-8 vector, where the entries correspond to the learning rate factor of the following:

  1. Input gate (Forward)

  2. Forget gate (Forward)

  3. Cell candidate (Forward)

  4. Output gate (Forward)

  5. Input gate (Backward)

  6. Forget gate (Backward)

  7. Cell candidate (Backward)

  8. Output gate (Backward)

To specify the same value for all the vectors, specify a nonnegative scalar.

Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64

InputWeightsL2Factor

L2 regularization factor for the input weights, specified as a nonnegative scalar or a 1-by-8 numeric vector.

The software multiplies this factor by the global L2 regularization factor to determine the L2 regularization factor for the input weights of the layer. For example, if InputWeightsL2Factor is 2, then the L2 regularization factor for the input weights of the layer is twice the current global L2 regularization factor. The software determines the L2 regularization factor based on the settings you specify using the trainingOptions function.

To control the value of the L2 regularization factor for the eight individual matrices in InputWeights, assign a 1-by-8 vector, where the entries correspond to the L2 regularization factor of the following:

  1. Input gate (Forward)

  2. Forget gate (Forward)

  3. Cell candidate (Forward)

  4. Output gate (Forward)

  5. Input gate (Backward)

  6. Forget gate (Backward)

  7. Cell candidate (Backward)

  8. Output gate (Backward)

To specify the same value for all the matrices, specify a nonnegative scalar.

Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
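For example, setting the factor to 0 removes the L2 penalty from the input weights entirely, which is one way to exempt a single parameter set from global weight decay (a sketch, using an arbitrary layer):

layer = bilstmLayer(100);
layer.InputWeightsL2Factor = 0;  % no L2 regularization applied to the input weights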

RecurrentWeightsL2Factor

L2 regularization factor for the recurrent weights, specified as a nonnegative scalar or a 1-by-8 numeric vector.

The software multiplies this factor by the global L2 regularization factor to determine the L2 regularization factor for the recurrent weights of the layer. For example, if RecurrentWeightsL2Factor is 2, then the L2 regularization factor for the recurrent weights of the layer is twice the current global L2 regularization factor. The software determines the L2 regularization factor based on the settings you specify using the trainingOptions function.

To control the value of the L2 regularization factor for the eight individual matrices in RecurrentWeights, assign a 1-by-8 vector, where the entries correspond to the L2 regularization factor of the following:

  1. Input gate (Forward)

  2. Forget gate (Forward)

  3. Cell candidate (Forward)

  4. Output gate (Forward)

  5. Input gate (Backward)

  6. Forget gate (Backward)

  7. Cell candidate (Backward)

  8. Output gate (Backward)

To specify the same value for all the matrices, specify a nonnegative scalar.

Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64

BiasL2Factor

L2 regularization factor for the biases, specified as a nonnegative scalar or a 1-by-8 numeric vector.

The software multiplies this factor by the global L2 regularization factor to determine the L2 regularization for the biases in this layer. For example, if BiasL2Factor is 2, then the L2 regularization for the biases in this layer is twice the global L2 regularization factor. The software determines the global L2 regularization factor based on the settings you specify using the trainingOptions function.

To control the value of the L2 regularization factor for the eight individual vectors in Bias, assign a 1-by-8 vector, where the entries correspond to the L2 regularization factor of the following:

  1. Input gate (Forward)

  2. Forget gate (Forward)

  3. Cell candidate (Forward)

  4. Output gate (Forward)

  5. Input gate (Backward)

  6. Forget gate (Backward)

  7. Cell candidate (Backward)

  8. Output gate (Backward)

To specify the same value for all the vectors, specify a nonnegative scalar.

Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64

Layer

Name

Layer name, specified as a character vector or string scalar. For Layer array input, the trainnet and dlnetwork functions automatically assign names to layers with the name "".

The BiLSTMLayer object stores this property as a character vector.

Data Types: char | string

NumInputs

This property is read-only.

Number of inputs to the layer.

If the HasStateInputs property is 0 (false), then the layer has one input with the name "in", which corresponds to the input data. In this case, the layer uses the HiddenState and CellState properties for the layer operation.

If the HasStateInputs property is 1 (true), then the layer has three inputs with the names "in", "hidden", and "cell", which correspond to the input data, hidden state, and cell state, respectively. In this case, the layer uses the values passed to these inputs for the layer operation. If HasStateInputs is 1 (true), then the HiddenState and CellState properties must be empty.

Data Types: double

InputNames

This property is read-only.

Input names of the layer.

If the HasStateInputs property is 0 (false), then the layer has one input with the name "in", which corresponds to the input data. In this case, the layer uses the HiddenState and CellState properties for the layer operation.

If the HasStateInputs property is 1 (true), then the layer has three inputs with the names "in", "hidden", and "cell", which correspond to the input data, hidden state, and cell state, respectively. In this case, the layer uses the values passed to these inputs for the layer operation. If HasStateInputs is 1 (true), then the HiddenState and CellState properties must be empty.

The BiLSTMLayer object stores this property as a cell array of character vectors.

NumOutputs

This property is read-only.

Number of outputs of the layer.

If the HasStateOutputs property is 0 (false), then the layer has one output with the name "out", which corresponds to the output data.

If the HasStateOutputs property is 1 (true), then the layer has three outputs with the names "out", "hidden", and "cell", which correspond to the output data, hidden state, and cell state, respectively. In this case, the layer also outputs the state values that it computes.

Data Types: double

OutputNames

This property is read-only.

Output names of the layer.

If the HasStateOutputs property is 0 (false), then the layer has one output with the name "out", which corresponds to the output data.

If the HasStateOutputs property is 1 (true), then the layer has three outputs with the names "out", "hidden", and "cell", which correspond to the output data, hidden state, and cell state, respectively. In this case, the layer also outputs the state values that it computes.

The BiLSTMLayer object stores this property as a cell array of character vectors.

Examples


Create a bidirectional LSTM layer with the name bilstm1 and 100 hidden units.

layer = bilstmLayer(100,Name="bilstm1")
layer = 
  BiLSTMLayer with properties:

                       Name: 'bilstm1'
                 InputNames: {'in'}
                OutputNames: {'out'}
                  NumInputs: 1
                 NumOutputs: 1
             HasStateInputs: 0
            HasStateOutputs: 0

   Hyperparameters
                  InputSize: 'auto'
             NumHiddenUnits: 100
                 OutputMode: 'sequence'
    StateActivationFunction: 'tanh'
     GateActivationFunction: 'sigmoid'

   Learnable Parameters
               InputWeights: []
           RecurrentWeights: []
                       Bias: []

   State Parameters
                HiddenState: []
                  CellState: []

Use properties method to see a list of all properties.

Include a bidirectional LSTM layer in a Layer array.

inputSize = 12;
numHiddenUnits = 100;
numClasses = 9;

layers = [ ...
    sequenceInputLayer(inputSize)
    bilstmLayer(numHiddenUnits)
    fullyConnectedLayer(numClasses)
    softmaxLayer]
layers = 
  4x1 Layer array with layers:

     1   ''   Sequence Input    Sequence input with 12 dimensions
     2   ''   BiLSTM            BiLSTM with 100 hidden units
     3   ''   Fully Connected   9 fully connected layer
     4   ''   Softmax           softmax


References

[1] Glorot, Xavier, and Yoshua Bengio. "Understanding the Difficulty of Training Deep Feedforward Neural Networks." In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256. Sardinia, Italy: AISTATS, 2010. https://proceedings.mlr.press/v9/glorot10a/glorot10a.pdf.

[2] He, Kaiming, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. "Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification." In 2015 IEEE International Conference on Computer Vision (ICCV), 1026–34. Santiago, Chile: IEEE, 2015. https://doi.org/10.1109/ICCV.2015.123.

[3] Saxe, Andrew M., James L. McClelland, and Surya Ganguli. "Exact Solutions to the Nonlinear Dynamics of Learning in Deep Linear Neural Networks." Preprint, submitted February 19, 2014. https://arxiv.org/abs/1312.6120.


Version History

Introduced in R2018a
