# dlarray

## Description

A deep learning array stores data with optional data format labels for custom training loops, and enables functions to compute and use derivatives through automatic differentiation.

**Tip**

For most deep learning tasks, you can use a pretrained neural network and adapt it to your own data. For an example showing how to use transfer learning to retrain a convolutional neural network to classify a new set of images, see Train Deep Learning Network to Classify New Images. Alternatively, you can create and train neural networks from scratch using the `trainnet`, `trainNetwork`, and `trainingOptions` functions.

If the `trainingOptions` function does not provide the training options that you need for your task, then you can create a custom training loop using automatic differentiation. To learn more, see Define Deep Learning Network for Custom Training Loops.

## Creation

### Description

### Input Arguments

### Output Arguments

## Usage

`dlarray` data formats enable you to execute the functions in the following table with assurance that the data has the appropriate shape.

Function | Operation | Validates Input Dimension | Affects Size of Input Dimension |
---|---|---|---|
`avgpool` | Compute the average of the input data over moving rectangular (or cuboidal) spatial (`'S'`) regions defined by a pool size parameter. | `'S'` | `'S'` |
`batchnorm` | Normalize the values contained in each channel (`'C'`) of the input data. | `'C'` | |
`crossentropy` | Compute the cross-entropy between estimates and target values, averaged by the size of the batch (`'B'`) dimension. | `'S'`, `'C'`, `'B'`, `'T'`, `'U'` (Estimates and target arrays must have the same sizes.) | `'S'`, `'C'`, `'B'`, `'T'`, `'U'` (Output is an unformatted scalar.) |
`dlconv` | Compute the deep learning convolution of the input data using an array of filters, matching the number of spatial (`'S'`) and (a function of the) channel (`'C'`) dimensions of the input, and adding a constant bias. | `'S'`, `'C'` | `'S'`, `'C'` |
`dltranspconv` | Compute the deep learning transposed convolution of the input data using an array of filters, matching the number of spatial (`'S'`) and (a function of the) channel (`'C'`) dimensions of the input, and adding a constant bias. | `'S'`, `'C'` | `'S'`, `'C'` |
`fullyconnect` | Compute a weighted sum of the input data and apply a bias for each batch (`'B'`) and time (`'T'`) dimension. | `'S'`, `'C'`, `'U'` | `'S'`, `'C'`, `'B'`, `'T'`, `'U'` (Output always has data format `'CB'`, `'CT'`, or `'CTB'`.) |
`gru` | Apply a gated recurrent unit calculation to the input data. | `'S'`, `'C'`, `'T'` | `'C'` |
`lstm` | Apply a long short-term memory calculation to the input data. | `'S'`, `'C'`, `'T'` | `'C'` |
`maxpool` | Compute the maximum of the input data over moving rectangular spatial (`'S'`) regions defined by a pool size parameter. | `'S'` | `'S'` |
`maxunpool` | Compute the unpooling operation over the spatial (`'S'`) dimensions. | `'S'` | `'S'` |
`mse` | Compute the half mean squared error between estimates and target values, averaged by the size of the batch (`'B'`) dimension. | `'S'`, `'C'`, `'B'`, `'T'`, `'U'` (Estimates and target arrays must have the same sizes.) | `'S'`, `'C'`, `'B'`, `'T'`, `'U'` (Output is an unformatted scalar.) |
`softmax` | Apply the softmax activation to each channel (`'C'`) of the input data. | `'C'` | |

These functions require each dimension to have a label. You can specify the dimension label format by providing the first input as a formatted `dlarray`, or by using the `'DataFormat'` name-value argument of the function.
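For instance, the two approaches look like this (a minimal sketch; the array sizes here are arbitrary):

```matlab
% Random image batch: 28-by-28 spatial, 3 channels, 16 observations
X = rand(28,28,3,16);

% Option 1: attach the format to the data itself
dlX = dlarray(X,'SSCB');
Y1 = softmax(dlX);          % operates along the 'C' dimension

% Option 2: keep the data unformatted and pass the format to the function
Y2 = softmax(dlarray(X),'DataFormat','SSCB');
```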

`dlarray` enforces the dimension label ordering of `'SCBTU'`. This enforcement eliminates ambiguous semantics in operations which implicitly match labels between inputs. `dlarray` also enforces that the dimension labels `'C'`, `'B'`, and `'T'` can each appear at most once. The functions that use these dimension labels accept at most one dimension for each label.
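A short sketch of this reordering: if you supply labels out of `'SCBTU'` order, `dlarray` permutes the dimensions of the data to match.

```matlab
% 5 observations ('B') of 3 channels ('C'), labeled in 'BC' order
dlX = dlarray(rand(5,3),'BC');

dims(dlX)   % 'CB' — labels reordered to follow 'SCBTU'
size(dlX)   % [3 5] — data permuted to match the new label order
```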

`dlarray` provides functions for obtaining the data format associated with a `dlarray` (`dims`), removing the data format (`stripdims`), and obtaining the dimensions associated with specific dimension labels (`finddim`).
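A minimal sketch of these format-query functions on a formatted array:

```matlab
dlX = dlarray(rand(28,28,3,16),'SSCB');

fmt  = dims(dlX);        % 'SSCB' — the data format as a character vector
sdim = finddim(dlX,'S'); % [1 2] — positions of the spatial dimensions
X    = stripdims(dlX);   % still a dlarray, but with no data format
```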

For more information on how a `dlarray` behaves with formats, see Notable dlarray Behaviors.

## Object Functions

Function | Description |
---|---|
`avgpool` | Pool data to average values over spatial dimensions |
`batchnorm` | Normalize data across all observations for each channel independently |
`crossentropy` | Cross-entropy loss for classification tasks |
`dims` | Dimension labels of `dlarray` |
`dlconv` | Deep learning convolution |
`dlgradient` | Compute gradients for custom training loops using automatic differentiation |
`dltranspconv` | Deep learning transposed convolution |
`extractdata` | Extract data from `dlarray` |
`finddim` | Find dimensions with specified label |
`fullyconnect` | Sum all weighted input data and apply a bias |
`gru` | Gated recurrent unit |
`leakyrelu` | Apply leaky rectified linear unit activation |
`lstm` | Long short-term memory |
`maxpool` | Pool data to maximum value |
`maxunpool` | Unpool the output of a maximum pooling operation |
`mse` | Half mean squared error |
`relu` | Apply rectified linear unit activation |
`sigmoid` | Apply sigmoid activation |
`softmax` | Apply softmax activation to channel dimension |
`stripdims` | Remove `dlarray` data format |

A `dlarray` also supports functions for numeric, matrix, and other operations. See the full list in List of Functions with dlarray Support.

## Examples

## Tips

- A `dlgradient` call must be inside a function. To obtain a numeric value of a gradient, you must evaluate the function using `dlfeval`, and the argument to the function must be a `dlarray`. See Use Automatic Differentiation In Deep Learning Toolbox.
- To enable the correct evaluation of gradients, `dlfeval` must call functions that use only supported functions for `dlarray`. See List of Functions with dlarray Support.
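As a minimal sketch of this pattern (the function name `sumSquaresGrad` is illustrative, not part of the toolbox):

```matlab
function [y,grad] = sumSquaresGrad(x)
    % Forward computation; dlgradient requires a scalar traced output
    y = sum(x.^2,'all');
    % Gradient of y with respect to the dlarray input x
    grad = dlgradient(y,x);
end
```

Evaluating `[y,grad] = dlfeval(@sumSquaresGrad,dlarray([1 2 3]))` returns `grad` equal to `2*x`, that is `[2 4 6]`. Calling `sumSquaresGrad` directly, without `dlfeval`, does not produce a gradient.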

## Extended Capabilities

## Version History

**Introduced in R2019b**