# gru

Gated recurrent unit

## Syntax

`Y = gru(X,H0,weights,recurrentWeights,bias)`

`[Y,hiddenState] = gru(X,H0,weights,recurrentWeights,bias)`

`[___] = gru(___,'DataFormat',FMT)`

## Description

The gated recurrent unit (GRU) operation allows a network to learn dependencies between time steps in time series and sequence data.

**Note**

This function applies the deep learning GRU operation to `dlarray` data. If you want to apply a GRU operation within a `layerGraph` object or `Layer` array, use the `gruLayer` layer.

`Y = gru(X,H0,weights,recurrentWeights,bias)` applies a gated recurrent unit (GRU) calculation to input `X` using the initial hidden state `H0`, and parameters `weights`, `recurrentWeights`, and `bias`. The input `X` must be a formatted `dlarray`. The output `Y` is a formatted `dlarray` with the same dimension format as `X`, except for any `'S'` dimensions.

The `gru` function updates the hidden state using the hyperbolic tangent function (tanh) as the state activation function. The `gru` function uses the sigmoid function given by $$\sigma (x)={(1+{e}^{-x})}^{-1}$$ as the gate activation function.
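For reference, the per-time-step update implied by these activation functions can be written out explicitly. The formulation below follows Cho et al. [1], with $\sigma$ the sigmoid above, $\odot$ elementwise multiplication, and the parameters split into reset-gate ($r$), update-gate ($z$), and candidate-state ($\tilde{h}$) components; the exact layout of these components within the concatenated `weights`, `recurrentWeights`, and `bias` arrays is implementation-defined:

$$\begin{aligned} r_t &= \sigma \left( W_r x_t + R_r h_{t-1} + b_r \right) \\ z_t &= \sigma \left( W_z x_t + R_z h_{t-1} + b_z \right) \\ \tilde{h}_t &= \tanh \left( W_{\tilde{h}} x_t + r_t \odot \left( R_{\tilde{h}} h_{t-1} \right) + b_{\tilde{h}} \right) \\ h_t &= \left( 1 - z_t \right) \odot \tilde{h}_t + z_t \odot h_{t-1} \end{aligned}$$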

`[Y,hiddenState] = gru(X,H0,weights,recurrentWeights,bias)` also returns the hidden state after the GRU operation.

`[___] = gru(___,'DataFormat',FMT)` also specifies the dimension format `FMT` when `X` is not a formatted `dlarray`. The output `Y` is an unformatted `dlarray` with the same dimension order as `X`, except for any `'S'` dimensions.
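As a language-neutral illustration of the computation (a minimal NumPy sketch, not the MATLAB API), the GRU forward pass over a sequence can be written as follows. The gate ordering within the stacked weight matrices is an assumption made for illustration only:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_forward(X, h0, W, R, b):
    """Minimal GRU forward pass (illustration only, not the MATLAB API).

    X  : (seq_len, input_size)        input sequence
    h0 : (hidden_size,)               initial hidden state
    W  : (3*hidden_size, input_size)  stacked input weights
    R  : (3*hidden_size, hidden_size) stacked recurrent weights
    b  : (3*hidden_size,)             stacked bias
    Returns (Y, h): all hidden states and the final hidden state.
    """
    n = h0.shape[0]
    # Assumed layout: rows 0:n -> reset gate, n:2n -> update gate, 2n:3n -> candidate.
    Wr, Wz, Wh = W[:n], W[n:2*n], W[2*n:]
    Rr, Rz, Rh = R[:n], R[n:2*n], R[2*n:]
    br, bz, bh = b[:n], b[n:2*n], b[2*n:]
    h = h0
    Y = []
    for x in X:
        r = sigmoid(Wr @ x + Rr @ h + br)             # reset gate
        z = sigmoid(Wz @ x + Rz @ h + bz)             # update gate
        h_cand = np.tanh(Wh @ x + r * (Rh @ h) + bh)  # candidate state
        h = (1 - z) * h_cand + z * h                  # blend candidate with previous state
        Y.append(h)
    return np.stack(Y), h

# Usage: random sequence of 5 steps, 4 features, 3 hidden units.
rng = np.random.default_rng(0)
Y, h = gru_forward(rng.standard_normal((5, 4)),
                   np.zeros(3),
                   rng.standard_normal((9, 4)),
                   rng.standard_normal((9, 3)),
                   np.zeros(9))
print(Y.shape, h.shape)  # (5, 3) (3,)
```

Because the state activation is tanh and the initial state is zero, every entry of the hidden state stays strictly inside (-1, 1).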

## Examples

## Input Arguments

## Output Arguments

## Limitations

`functionToLayerGraph` does not support the `gru` function. If you use `functionToLayerGraph` with a function that contains the `gru` operation, the resulting `LayerGraph` contains placeholder layers.

## More About

## References

[1] Cho, Kyunghyun, Bart Van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. "Learning phrase representations using RNN encoder-decoder for statistical machine translation." *arXiv preprint arXiv:1406.1078* (2014).

## Extended Capabilities

## Version History

**Introduced in R2020a**

## See Also

`dlarray` | `fullyconnect` | `softmax` | `dlgradient` | `dlfeval` | `lstm` | `attention`