# gruLayer

Gated recurrent unit (GRU) layer

## Description

A GRU layer learns dependencies between time steps in time series and sequence data.

## Creation

### Syntax

`layer = gruLayer(numHiddenUnits)`

`layer = gruLayer(numHiddenUnits,Name,Value)`

### Description

`layer = gruLayer(numHiddenUnits)` creates a GRU layer and sets the `NumHiddenUnits` property.

`layer = gruLayer(numHiddenUnits,Name,Value)` sets additional `OutputMode`, Activations, State, Parameters and Initialization, Learn Rate and Regularization, and `Name` properties using one or more name-value pair arguments. You can specify multiple name-value pair arguments. Enclose each property name in quotes.

## Properties

### GRU

`NumHiddenUnits` – Number of hidden units (also known as the hidden size), specified as a positive integer.

The number of hidden units corresponds to the amount of information remembered between time steps (the hidden state). The hidden state can contain information from all previous time steps, regardless of the sequence length. If the number of hidden units is too large, then the layer might overfit to the training data. This value can vary from a few dozen to a few thousand.

The hidden state does not limit the number of time steps that are processed in an iteration. To split your sequences into smaller sequences for training, use the `'SequenceLength'` option in `trainingOptions`.

Example: 200
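
For example, this hedged sketch caps each training sequence at 100 time steps; the solver and epoch count are illustrative placeholders:

```matlab
% Illustrative sketch: split training sequences into chunks of at most
% 100 time steps. Solver and epoch count are placeholder choices.
options = trainingOptions('adam', ...
    'SequenceLength',100, ...
    'MaxEpochs',30);
```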

`OutputMode` – Format of output, specified as one of the following (see the sketch after this list):

• `'sequence'` – Output the complete sequence.

• `'last'` – Output the last time step of the sequence.
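
As a minimal sketch, the output mode is set at construction; `'sequence'` suits sequence-to-sequence tasks, and `'last'` suits sequence-to-one tasks such as whole-sequence classification:

```matlab
% 'sequence': one output per time step (sequence-to-sequence tasks).
seqLayer = gruLayer(100,'OutputMode','sequence');

% 'last': output only the final time step (sequence-to-one tasks,
% for example whole-sequence classification).
lastLayer = gruLayer(100,'OutputMode','last');
```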

`ResetGateMode` – Reset gate mode, specified as one of the following (the candidate-state formulas after this list sketch the difference between the modes):

• `'after-multiplication'` – Apply reset gate after matrix multiplication. This option is cuDNN compatible.

• `'before-multiplication'` – Apply reset gate before matrix multiplication.

• `'recurrent-bias-after-multiplication'` – Apply reset gate after matrix multiplication and use an additional set of bias terms for the recurrent weights.
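
The three modes differ only in where the reset gate enters the candidate state calculation. As a sketch (with ${\sigma}_{s}$ the state activation function, and $W_{\tilde{h}}$, $R_{\tilde{h}}$, and the $b$ terms the candidate-state slices of the input weights, recurrent weights, and biases; the exact notation in the full algorithm description may differ):

$$\begin{aligned}
\text{'after-multiplication':} \quad & \tilde{h}_t = \sigma_s\!\left(W_{\tilde{h}}x_t + b_{\tilde{h}} + r_t \odot \left(R_{\tilde{h}}h_{t-1}\right)\right) \\
\text{'before-multiplication':} \quad & \tilde{h}_t = \sigma_s\!\left(W_{\tilde{h}}x_t + b_{\tilde{h}} + R_{\tilde{h}}\left(r_t \odot h_{t-1}\right)\right) \\
\text{'recurrent-bias-after-multiplication':} \quad & \tilde{h}_t = \sigma_s\!\left(W_{\tilde{h}}x_t + b_{W\tilde{h}} + r_t \odot \left(R_{\tilde{h}}h_{t-1} + b_{R\tilde{h}}\right)\right)
\end{aligned}$$

where $r_t$ is the reset gate, $x_t$ the input, and $h_{t-1}$ the previous hidden state. The `'before-multiplication'` form matches the original formulation of Cho et al. [1].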

`InputSize` – Input size, specified as a positive integer or `'auto'`. If `InputSize` is `'auto'`, then the software automatically assigns the input size at training time.

Example: 100

### Activations

`StateActivationFunction` – Activation function to update the hidden state, specified as one of the following:

• `'tanh'` – Use the hyperbolic tangent function (tanh).

• `'softsign'` – Use the softsign function $\text{softsign}\left(x\right)=\frac{x}{1+|x|}$.

The layer uses this option as the function ${\sigma }_{s}$ in the calculations to update the hidden state.

`GateActivationFunction` – Activation function to apply to the gates, specified as one of the following:

• `'sigmoid'` – Use the sigmoid function $\sigma \left(x\right)={\left(1+{e}^{-x}\right)}^{-1}$.

• `'hard-sigmoid'` – Use the hard sigmoid function

$$\sigma \left(x\right)=\begin{cases}0 & \text{if } x<-2.5\\ 0.2x+0.5 & \text{if } -2.5\le x\le 2.5\\ 1 & \text{if } x>2.5.\end{cases}$$

The layer uses this option as the function ${\sigma }_{g}$ in the calculations for the layer gates.

### State

`HiddenState` – Initial value of the hidden state, specified as a `NumHiddenUnits`-by-1 numeric vector. This value corresponds to the hidden state at time step 0.

After setting this property, calls to the `resetState` function set the hidden state to this value.
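
A minimal sketch of setting a custom initial state (the small random values are an arbitrary illustrative choice):

```matlab
% Sketch: set a custom hidden state for time step 0. Any
% NumHiddenUnits-by-1 numeric vector works; these values are arbitrary.
numHiddenUnits = 100;
layer = gruLayer(numHiddenUnits);
layer.HiddenState = 0.01*randn(numHiddenUnits,1);
% resetState on a network containing this layer now restores the hidden
% state to this stored value rather than to zeros.
```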

### Parameters and Initialization

`InputWeightsInitializer` – Function to initialize the input weights, specified as one of the following:

• `'glorot'` – Initialize the input weights with the Glorot initializer [2] (also known as Xavier initializer). The Glorot initializer independently samples from a uniform distribution with zero mean and variance `2/(InputSize + numOut)`, where `numOut = 3*NumHiddenUnits`.

• `'he'` – Initialize the input weights with the He initializer [3]. The He initializer samples from a normal distribution with zero mean and variance `2/InputSize`.

• `'orthogonal'` – Initialize the input weights with Q, the orthogonal matrix given by the QR decomposition of Z = QR for a random matrix Z sampled from a unit normal distribution [4].

• `'narrow-normal'` – Initialize the input weights by independently sampling from a normal distribution with zero mean and standard deviation 0.01.

• `'zeros'` – Initialize the input weights with zeros.

• `'ones'` – Initialize the input weights with ones.

• Function handle – Initialize the input weights with a custom function. If you specify a function handle, then the function must be of the form `weights = func(sz)`, where `sz` is the size of the input weights.

The layer only initializes the input weights when the `InputWeights` property is empty.

Data Types: `char` | `string` | `function_handle`
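
For example, a custom initializer can be supplied as an anonymous function; the scale factor below is an arbitrary illustration, not a recommended scheme:

```matlab
% Sketch: custom input-weights initializer. The function receives the
% weight size sz (that is, 3*NumHiddenUnits-by-InputSize) and must return
% an array of that size. The 0.02 scale is an arbitrary choice.
myInit = @(sz) 0.02*randn(sz);
layer = gruLayer(100,'InputWeightsInitializer',myInit);
```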

`RecurrentWeightsInitializer` – Function to initialize the recurrent weights, specified as one of the following:

• `'orthogonal'` – Initialize the recurrent weights with Q, the orthogonal matrix given by the QR decomposition of Z = QR for a random matrix Z sampled from a unit normal distribution [4].

• `'glorot'` – Initialize the recurrent weights with the Glorot initializer [2] (also known as Xavier initializer). The Glorot initializer independently samples from a uniform distribution with zero mean and variance `2/(numIn + numOut)`, where `numIn = NumHiddenUnits` and `numOut = 3*NumHiddenUnits`.

• `'he'` – Initialize the recurrent weights with the He initializer [3]. The He initializer samples from a normal distribution with zero mean and variance `2/NumHiddenUnits`.

• `'narrow-normal'` – Initialize the recurrent weights by independently sampling from a normal distribution with zero mean and standard deviation 0.01.

• `'zeros'` – Initialize the recurrent weights with zeros.

• `'ones'` – Initialize the recurrent weights with ones.

• Function handle – Initialize the recurrent weights with a custom function. If you specify a function handle, then the function must be of the form `weights = func(sz)`, where `sz` is the size of the recurrent weights.

The layer only initializes the recurrent weights when the `RecurrentWeights` property is empty.

Data Types: `char` | `string` | `function_handle`

`BiasInitializer` – Function to initialize the bias, specified as one of the following:

• `'zeros'` – Initialize the bias with zeros.

• `'narrow-normal'` – Initialize the bias by independently sampling from a normal distribution with zero mean and standard deviation 0.01.

• `'ones'` – Initialize the bias with ones.

• Function handle – Initialize the bias with a custom function. If you specify a function handle, then the function must be of the form `bias = func(sz)`, where `sz` is the size of the bias.

The layer only initializes the bias when the `Bias` property is empty.

Data Types: `char` | `string` | `function_handle`

`InputWeights` – Input weights, specified as a matrix.

The input weight matrix is a concatenation of the three input weight matrices for the components in the GRU layer. The three matrices are concatenated vertically in the following order:

1. Reset gate

2. Update gate

3. Candidate state

The input weights are learnable parameters. When training a network, if `InputWeights` is nonempty, then `trainNetwork` uses the `InputWeights` property as the initial value. If `InputWeights` is empty, then `trainNetwork` uses the initializer specified by `InputWeightsInitializer`.

At training time, `InputWeights` is a `3*NumHiddenUnits`-by-`InputSize` matrix.
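
Given this layout, the per-gate blocks can be recovered by row indexing. A sketch, assuming `layer` is a GRU layer whose `InputWeights` property has been populated (for example, by training):

```matlab
% Sketch: slice the stacked input weight matrix into its three per-gate
% blocks (rows are ordered reset gate, update gate, candidate state).
nH = layer.NumHiddenUnits;
Wr = layer.InputWeights(1:nH,:);           % reset gate
Wz = layer.InputWeights(nH+1:2*nH,:);      % update gate
Wc = layer.InputWeights(2*nH+1:3*nH,:);    % candidate state
```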

`RecurrentWeights` – Recurrent weights, specified as a matrix.

The recurrent weight matrix is a concatenation of the three recurrent weight matrices for the components in the GRU layer. The three matrices are vertically concatenated in the following order:

1. Reset gate

2. Update gate

3. Candidate state

The recurrent weights are learnable parameters. When training a network, if `RecurrentWeights` is nonempty, then `trainNetwork` uses the `RecurrentWeights` property as the initial value. If `RecurrentWeights` is empty, then `trainNetwork` uses the initializer specified by `RecurrentWeightsInitializer`.

At training time, `RecurrentWeights` is a `3*NumHiddenUnits`-by-`NumHiddenUnits` matrix.

`Bias` – Layer biases for the GRU layer, specified as a numeric vector.

If `ResetGateMode` is `'after-multiplication'` or `'before-multiplication'`, then the bias vector is a concatenation of three bias vectors for the components in the GRU layer. The three vectors are concatenated vertically in the following order:

1. Reset gate

2. Update gate

3. Candidate state

In this case, at training time, `Bias` is a `3*NumHiddenUnits`-by-1 numeric vector.

If `ResetGateMode` is `'recurrent-bias-after-multiplication'`, then the bias vector is a concatenation of six bias vectors for the components in the GRU layer. The six vectors are concatenated vertically in the following order:

1. Reset gate

2. Update gate

3. Candidate state

4. Reset gate (recurrent bias)

5. Update gate (recurrent bias)

6. Candidate state (recurrent bias)

In this case, at training time, `Bias` is a `6*NumHiddenUnits`-by-1 numeric vector.

The layer biases are learnable parameters. When training a network, if `Bias` is nonempty, then `trainNetwork` uses the `Bias` property as the initial value. If `Bias` is empty, then `trainNetwork` uses the initializer specified by `BiasInitializer`.
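
A short sketch of how the reset gate mode determines the bias length (the sizes apply once `Bias` has been initialized, for example after training):

```matlab
% Sketch: bias length depends on ResetGateMode.
layerA = gruLayer(100,'ResetGateMode','after-multiplication');
% After initialization, layerA.Bias is 300-by-1 (3*NumHiddenUnits).
layerB = gruLayer(100,'ResetGateMode','recurrent-bias-after-multiplication');
% After initialization, layerB.Bias is 600-by-1 (6*NumHiddenUnits).
```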

### Learn Rate and Regularization

`InputWeightsLearnRateFactor` – Learning rate factor for the input weights, specified as a nonnegative scalar or a 1-by-3 numeric vector.

The software multiplies this factor by the global learning rate to determine the learning rate factor for the input weights of the layer. For example, if `InputWeightsLearnRateFactor` is 2, then the learning rate factor for the input weights of the layer is twice the current global learning rate. The software determines the global learning rate based on the settings specified with the `trainingOptions` function.

To control the value of the learning rate factor for the three individual matrices in `InputWeights`, specify a 1-by-3 vector. The entries of `InputWeightsLearnRateFactor` correspond to the learning rate factor of the following:

1. Reset gate

2. Update gate

3. Candidate state

To specify the same value for all the matrices, specify a nonnegative scalar.

Example: `2`

Example: `[1 2 1]`
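
For instance, a per-gate vector can be assigned directly to the property; this sketch doubles only the update-gate factor:

```matlab
% Sketch: per-gate learning rate factors, ordered [reset update candidate].
layer = gruLayer(100);
layer.InputWeightsLearnRateFactor = [1 2 1];   % update gate learns 2x faster
```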

`RecurrentWeightsLearnRateFactor` – Learning rate factor for the recurrent weights, specified as a nonnegative scalar or a 1-by-3 numeric vector.

The software multiplies this factor by the global learning rate to determine the learning rate for the recurrent weights of the layer. For example, if `RecurrentWeightsLearnRateFactor` is 2, then the learning rate for the recurrent weights of the layer is twice the current global learning rate. The software determines the global learning rate based on the settings specified with the `trainingOptions` function.

To control the value of the learning rate factor for the three individual matrices in `RecurrentWeights`, specify a 1-by-3 vector. The entries of `RecurrentWeightsLearnRateFactor` correspond to the learning rate factor of the following:

1. Reset gate

2. Update gate

3. Candidate state

To specify the same value for all the matrices, specify a nonnegative scalar.

Example: `2`

Example: `[1 2 1]`

`BiasLearnRateFactor` – Learning rate factor for the biases, specified as a nonnegative scalar or a 1-by-3 numeric vector.

The software multiplies this factor by the global learning rate to determine the learning rate for the biases in this layer. For example, if `BiasLearnRateFactor` is 2, then the learning rate for the biases in the layer is twice the current global learning rate. The software determines the global learning rate based on the settings specified with the `trainingOptions` function.

To control the value of the learning rate factor for the three individual vectors in `Bias`, specify a 1-by-3 vector. The entries of `BiasLearnRateFactor` correspond to the learning rate factor of the following:

1. Reset gate

2. Update gate

3. Candidate state

If `ResetGateMode` is `'recurrent-bias-after-multiplication'`, then the software uses the same vector for the recurrent bias vectors.

To specify the same value for all the vectors, specify a nonnegative scalar.

Example: `2`

Example: `[1 2 1]`

`InputWeightsL2Factor` – L2 regularization factor for the input weights, specified as a nonnegative scalar or a 1-by-3 numeric vector.

The software multiplies this factor by the global L2 regularization factor to determine the L2 regularization factor for the input weights of the layer. For example, if `InputWeightsL2Factor` is 2, then the L2 regularization factor for the input weights of the layer is twice the current global L2 regularization factor. The software determines the L2 regularization factor based on the settings specified with the `trainingOptions` function.

To control the value of the L2 regularization factor for the three individual matrices in `InputWeights`, specify a 1-by-3 vector. The entries of `InputWeightsL2Factor` correspond to the L2 regularization factor of the following:

1. Reset gate

2. Update gate

3. Candidate state

To specify the same value for all the matrices, specify a nonnegative scalar.

Example: `2`

Example: `[1 2 1]`

`RecurrentWeightsL2Factor` – L2 regularization factor for the recurrent weights, specified as a nonnegative scalar or a 1-by-3 numeric vector.

The software multiplies this factor by the global L2 regularization factor to determine the L2 regularization factor for the recurrent weights of the layer. For example, if `RecurrentWeightsL2Factor` is 2, then the L2 regularization factor for the recurrent weights of the layer is twice the current global L2 regularization factor. The software determines the L2 regularization factor based on the settings specified with the `trainingOptions` function.

To control the value of the L2 regularization factor for the three individual matrices in `RecurrentWeights`, specify a 1-by-3 vector. The entries of `RecurrentWeightsL2Factor` correspond to the L2 regularization factor of the following:

1. Reset gate

2. Update gate

3. Candidate state

To specify the same value for all the matrices, specify a nonnegative scalar.

Example: `2`

Example: `[1 2 1]`

`BiasL2Factor` – L2 regularization factor for the biases, specified as a nonnegative scalar or a 1-by-3 numeric vector.

The software multiplies this factor by the global L2 regularization factor to determine the L2 regularization for the biases in this layer. For example, if `BiasL2Factor` is 2, then the L2 regularization for the biases in this layer is twice the global L2 regularization factor. You can specify the global L2 regularization factor using the `trainingOptions` function.

To control the value of the L2 regularization factor for the individual vectors in `Bias`, specify a 1-by-3 vector. The entries of `BiasL2Factor` correspond to the L2 regularization factor of the following:

1. Reset gate

2. Update gate

3. Candidate state

If `ResetGateMode` is `'recurrent-bias-after-multiplication'`, then the software uses the same vector for the recurrent bias vectors.

To specify the same value for all the vectors, specify a nonnegative scalar.

Example: `2`

Example: `[1 2 1]`
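
As a brief sketch, the bias L2 factor is sometimes set to zero so that only the weights are regularized; this choice is illustrative, not a recommendation:

```matlab
% Sketch: disable L2 regularization for the biases of this layer only.
layer = gruLayer(100);
layer.BiasL2Factor = 0;
```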

### Layer

`Name` – Layer name, specified as a character vector or a string scalar. If `Name` is set to `''`, then the software automatically assigns a name at training time.

Data Types: `char` | `string`

`NumInputs` – Number of inputs of the layer. This layer accepts a single input only.

Data Types: `double`

`InputNames` – Input names of the layer. This layer accepts a single input only.

Data Types: `cell`

`NumOutputs` – Number of outputs of the layer. This layer has a single output only.

Data Types: `double`

`OutputNames` – Output names of the layer. This layer has a single output only.

Data Types: `cell`

## Examples

Create a GRU layer with the name `'gru1'` and 100 hidden units.

```matlab
layer = gruLayer(100,'Name','gru1')
```
```
layer = 
  GRULayer with properties:

                       Name: 'gru1'

   Hyperparameters
                  InputSize: 'auto'
             NumHiddenUnits: 100
                 OutputMode: 'sequence'
    StateActivationFunction: 'tanh'
     GateActivationFunction: 'sigmoid'
              ResetGateMode: 'after-multiplication'

   Learnable Parameters
               InputWeights: []
           RecurrentWeights: []
                       Bias: []

   State Parameters
                HiddenState: []
```

Include a GRU layer in a `Layer` array.

```matlab
inputSize = 12;
numHiddenUnits = 100;
numClasses = 9;

layers = [ ...
    sequenceInputLayer(inputSize)
    gruLayer(numHiddenUnits)
    fullyConnectedLayer(numClasses)
    softmaxLayer
    classificationLayer]
```
```
layers = 
  5x1 Layer array with layers:

     1   ''   Sequence Input          Sequence input with 12 dimensions
     2   ''   GRU                     GRU with 100 hidden units
     3   ''   Fully Connected         9 fully connected layer
     4   ''   Softmax                 softmax
     5   ''   Classification Output   crossentropyex
```
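
To train this network, pass the layer array to `trainNetwork` together with data and options. In this hedged sketch, `XTrain` and `YTrain` are assumed placeholders for your sequences and responses, not data shipped with this example:

```matlab
% Hedged sketch: XTrain (cell array of 12-by-T sequences) and YTrain
% (responses matching the output mode) are assumed placeholders.
options = trainingOptions('adam', ...
    'MaxEpochs',20, ...
    'SequenceLength','longest');
% net = trainNetwork(XTrain,YTrain,layers,options);
```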

## References

[1] Cho, Kyunghyun, Bart Van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. "Learning phrase representations using RNN encoder-decoder for statistical machine translation." arXiv preprint arXiv:1406.1078 (2014).

[2] Glorot, Xavier, and Yoshua Bengio. "Understanding the Difficulty of Training Deep Feedforward Neural Networks." In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256. Sardinia, Italy: AISTATS, 2010.

[3] He, Kaiming, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. "Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification." In Proceedings of the 2015 IEEE International Conference on Computer Vision, 1026–1034. Washington, DC: IEEE Computer Vision Society, 2015.

[4] Saxe, Andrew M., James L. McClelland, and Surya Ganguli. "Exact solutions to the nonlinear dynamics of learning in deep linear neural networks." arXiv preprint arXiv:1312.6120 (2013).

Introduced in R2020a