# deep.gpu.deterministicAlgorithms

Set determinism of deep learning operations on the GPU to get reproducible results

*Since R2024b*

## Syntax

`previousState = deep.gpu.deterministicAlgorithms(newState)`

`state = deep.gpu.deterministicAlgorithms`

## Description

`previousState = deep.gpu.deterministicAlgorithms(newState)` returns the current determinism state of GPU deep learning operations as `1` (`true`) or `0` (`false`) before changing the state according to the input `newState`. If `newState` is `1` (`true`), then subsequent calls to GPU deep learning operations use only deterministic algorithms. This function requires Parallel Computing Toolbox™.

`state = deep.gpu.deterministicAlgorithms` returns the current determinism state of GPU deep learning operations as `1` (`true`) or `0` (`false`). If `state` is `1` (`true`), then subsequent calls to GPU deep learning operations use only deterministic algorithms.

**Tip**

Use this function only if you require your GPU deep learning operations to be exactly reproducible, because using only deterministic algorithms can slow down computations.

This function controls only the algorithms selected by the NVIDIA® cuDNN library. To enable reproducibility, you must also control other sources of randomness, for example, by setting the random number generator and seed. In most cases, setting the random number generator and seed on the CPU and GPU using the `rng` and `gpurng` (Parallel Computing Toolbox) functions, respectively, is sufficient. For more information, see Limitations and Tips.

## Examples
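
**Use Deterministic Algorithms for Reproducible Results**

A minimal sketch of one way to use the function, based on the description and tips above; the seed values and the commented-out training step are illustrative placeholders:

```matlab
% Capture the current determinism state and enable deterministic algorithms.
previousState = deep.gpu.deterministicAlgorithms(true);

% Control the other sources of randomness: seed the CPU and GPU
% random number generators.
rng("twister")
gpurng(0)

% ... run your GPU deep learning operations here, for example,
% net = trainnet(data,layers,"crossentropy",options); ...

% Restore the previous determinism state when you are done.
deep.gpu.deterministicAlgorithms(previousState);
```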

## Input Arguments

`newState` — New determinism state of GPU deep learning operations, specified as `1` (`true`) or `0` (`false`).

## Limitations

This function only affects deep learning computations on the GPU in MATLAB®. It does not affect:

- Deep learning operations on a CPU, for example, training a network using the `trainnet` function with the `ExecutionEnvironment` training option set to `"cpu"` or `"parallel-cpu"`.
- Deep learning code generated using GPU Coder™ or MATLAB Coder™.
- Predictions using the `predict` and `minibatchpredict` functions when the `Acceleration` option is set to `"mex"`.
- Deep learning operations in Simulink®.

When using only deterministic algorithms, computations can be slower.

Because the algorithm selection by the NVIDIA cuDNN library depends on several factors, including the hardware and the current GPU memory usage, your workflow might not give identical results on different GPUs.

Training a network is not reproducible, even if you use this function and set the random number generator and seed, if either of the following applies (see the sketch after this list):

- You use the `trainnet` function with the `PreprocessingEnvironment` training option set to `"background"` or `"parallel"`.
- You train a network using a `minibatchqueue` object with the `PreprocessingEnvironment` property set to `"background"` or `"parallel"`.
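
Given these limitations, a hedged sketch of training options that keep preprocessing reproducible; the solver choice and the other option values are placeholders:

```matlab
% Keep preprocessing in the serial environment (its default value) so
% that training remains reproducible.
options = trainingOptions("adam", ...
    ExecutionEnvironment="gpu", ...
    PreprocessingEnvironment="serial");
```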

## Tips

Sources of randomness in your deep learning workflow can include:

- Learnable parameter and state value initialization — Initialization functions that sample from a distribution using random numbers generated on the CPU.
- Data shuffling during training — If the `Shuffle` training option is set to `"once"` or `"every-epoch"`, the `trainnet` function shuffles the training data using random numbers generated on the CPU.
- Dropout layers — If you are training using a CPU, dropout layers generate random numbers on the CPU. If you are training using a GPU, dropout layers generate random numbers on the GPU.
- Custom layers — The code inside your function layer dictates where the random numbers are generated. For example, `rand(10)` generates random numbers on the CPU, while `rand(10,"gpuArray")` generates random numbers on the GPU. See the sketch after this list.
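
As an illustration of the custom layer point, a hedged sketch (the layer and the noise scale are hypothetical, not from this page) of a function layer whose random numbers are generated where the input data lives:

```matlab
% "like" makes randn match the type of X, so the noise is generated on
% the GPU when X is a gpuArray and on the CPU otherwise.
noiseLayer = functionLayer(@(X) X + 0.1*randn(size(X),"like",X));
```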

If you are performing deep learning operations in parallel, for example, by using the `trainnet` function with the `ExecutionEnvironment` option set to `"parallel-gpu"`, then set the random number generator and seed on each of the workers, as in the sketch after these tips. For more information, see Control Random Number Streams on Workers (Parallel Computing Toolbox).

You can change the default algorithm and seed for the random number generator from the MATLAB Preferences window. To ensure that `rng("default")` uses the same algorithm and seed in different MATLAB sessions, ensure that the sessions have the same default algorithm and seed preferences. Alternatively, you can avoid using the preferences by specifying the seed and algorithm. For example, call `rng("twister")` to use the Mersenne Twister algorithm with a seed of 0.
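
As referenced above, a minimal sketch of seeding the generators on each worker before parallel GPU operations; it assumes a parallel pool is already open, and the seed `0` is a placeholder:

```matlab
% Set the CPU and GPU random number generators on every worker in the pool.
spmd
    rng(0,"twister")  % CPU generator on this worker
    gpurng(0)         % GPU generator on this worker
end
```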

## Version History

**Introduced in R2024b**

## See Also

`rng` | `gpurng` (Parallel Computing Toolbox) | `trainnet`

### Topics

- Generate Random Numbers That Are Repeatable
- Random Number Streams on a GPU (Parallel Computing Toolbox)
- Control Random Number Streams on Workers (Parallel Computing Toolbox)
- Reproduce Network Training on a GPU