

Visualize network features using deep dream


I = deepDreamImage(net,layer,channels) returns an array of images that strongly activate the channels channels of the layer with numeric index or name given by layer within the network net. These images highlight the features learned by the network.


I = deepDreamImage(net,layer,channels,Name,Value) returns images with additional options specified by one or more Name,Value pair arguments.



Load a pretrained AlexNet network.

net = alexnet;

Visualize the first 25 features learned by the first convolutional layer ('conv1') using deepDreamImage. Set 'PyramidLevels' to 1 so that the images are not scaled.

layer = 'conv1';
channels = 1:25;

I = deepDreamImage(net,layer,channels, ...
    'PyramidLevels',1);

Display the images in a 5-by-5 grid.

figure
for i = 1:25
    subplot(5,5,i)
    imshow(I(:,:,:,i))
end

Input Arguments


Trained network, specified as a SeriesNetwork object or a DAGNetwork object. You can get a trained network by importing a pretrained network or by training your own network using the trainNetwork function. For more information about pretrained networks, see Pretrained Deep Neural Networks.

deepDreamImage only supports networks with an image input layer.

Layer to visualize, specified as a positive integer, a character vector, or a string scalar. If net is a DAGNetwork object, specify layer as a character vector or string scalar only. Specify layer as the index or the name of the layer you want to visualize the activations of. To visualize classification layer features, select the last fully connected layer before the classification layer.


Selecting ReLU or dropout layers for visualization may not produce useful images because of the effect that these layers have on the network gradients.
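For example, you can inspect the layer names in the network to choose a suitable layer. The sketch below assumes the pretrained AlexNet network, whose last fully connected layer is named 'fc8':

```matlab
% Sketch: list layer names to choose a layer to visualize (assumes alexnet).
net = alexnet;
net.Layers          % display all layers with their names and types
layer = 'fc8';      % last fully connected layer before the classification layers
```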

Queried channels, specified as a scalar or a vector of channel indices. If channels is a vector, the layer activations for each channel are optimized independently. The possible choices for channels depend on the selected layer. For convolutional layers, the NumFilters property specifies the number of output channels. For fully connected layers, the OutputSize property specifies the number of output channels.
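For instance, you can query the number of available channels for a layer before choosing channels. This sketch assumes the pretrained AlexNet network, in which 'conv1' is the second layer:

```matlab
% Sketch: query how many channels a convolutional layer has (assumes alexnet).
net = alexnet;
conv1 = net.Layers(2);           % 'conv1' is the second layer of AlexNet
numChannels = conv1.NumFilters;  % 96 filters, so valid channel indices are 1:96
channels = 1:numChannels;
```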

Name-Value Arguments

Specify optional pairs of arguments as Name1=Value1,...,NameN=ValueN, where Name is the argument name and Value is the corresponding value. Name-value arguments must appear after other arguments, but the order of the pairs does not matter.

Before R2021a, use commas to separate each name and value, and enclose Name in quotes.

Example: deepDreamImage(net,layer,channels,'NumIterations',100,'ExecutionEnvironment','gpu') generates images using 100 iterations per pyramid level and uses the GPU.

Image to initialize Deep Dream, specified as the comma-separated pair consisting of 'InitialImage' and a numeric array. Use this option to see how an image is modified to maximize network layer activations. The minimum height and width of the initial image depend on all the layers up to and including the selected layer:

  • For layers towards the end of the network, the initial image must be at least the same height and width as the image input layer.

  • For layers towards the beginning of the network, the height and width of the initial image can be smaller than the image input layer. However, it must be large enough to produce a scalar output at the selected layer.

  • The number of channels of the initial image must match the number of channels in the image input layer of the network.

If you do not specify an initial image, the software uses a random image with pixels drawn from a standard normal distribution. See also 'PyramidLevels'.
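As a sketch, assuming the pretrained AlexNet network and the sample image peppers.png that ships with MATLAB, you can initialize deep dream from an existing image like this:

```matlab
% Sketch: start deep dream from an existing image (assumes alexnet).
net = alexnet;
inputSize = net.Layers(1).InputSize;                  % [227 227 3] for AlexNet
im = imresize(imread('peppers.png'),inputSize(1:2));  % match input height and width
I = deepDreamImage(net,'fc8',5,'InitialImage',im);
figure
imshow(I)
```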

Number of multi-resolution image pyramid levels to use to generate the output image, specified as a positive integer. Increase the number of pyramid levels to produce larger output images at the expense of additional computation. To produce an image of the same size as the initial image, set the number of levels to 1.

Example: 'PyramidLevels',3

Scale between each pyramid level, specified as a scalar greater than 1. Reduce the pyramid scale to incorporate fine-grained details into the output image. Adjusting the pyramid scale can help generate more informative images for layers at the beginning of the network.

Example: 'PyramidScale',1.4
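A sketch combining both pyramid options, assuming the pretrained AlexNet network:

```matlab
% Sketch: more pyramid levels give a larger output image;
% a smaller pyramid scale preserves finer detail between levels.
net = alexnet;
I = deepDreamImage(net,'conv2',1, ...
    'PyramidLevels',3, ...
    'PyramidScale',1.2);
```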

Number of iterations per pyramid level, specified as a positive integer. Increase the number of iterations to produce more detailed images at the expense of additional computation.

Example: 'NumIterations',10

Type of scaling to apply to the output image, specified as the comma-separated pair consisting of 'OutputScaling' and one of the following:

  • 'linear' — Scale output pixel values to the interval [0,1]. The output image corresponding to each layer channel, I(:,:,:,channel), is scaled independently.

  • 'none' — Disable output scaling.

Scaling the pixel values can cause the network to misclassify the output image. If you want to classify the output image, set the 'OutputScaling' value to 'none'.

Example: 'OutputScaling','linear'
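For example, to classify the generated image with the same network, disable output scaling. This sketch assumes the pretrained AlexNet network:

```matlab
% Sketch: keep raw pixel values so the network can classify the output image.
net = alexnet;
I = deepDreamImage(net,'fc8',5,'OutputScaling','none');
label = classify(net,I)   % classify the generated image
```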

Indicator to display progress information in the command window, specified as the comma-separated pair consisting of 'Verbose' and either 1 (true) or 0 (false). The displayed information includes the pyramid level, iteration, and the activation strength.

Example: 'Verbose',0

Data Types: logical

Hardware resource, specified as the comma-separated pair consisting of 'ExecutionEnvironment' and one of these values:

  • "auto" — Use a GPU if one is available. Otherwise, use the CPU.

  • "gpu" — Use the GPU. Using a GPU requires a Parallel Computing Toolbox™ license and a supported GPU device. For information about supported devices, see GPU Computing Requirements (Parallel Computing Toolbox). If Parallel Computing Toolbox or a suitable GPU is not available, then the software returns an error.

  • "cpu" — Use the CPU.
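A sketch that selects the GPU only when one is available, using canUseGPU (introduced in R2019b) and assuming the pretrained AlexNet network:

```matlab
% Sketch: fall back to the CPU when no supported GPU is present.
net = alexnet;
if canUseGPU
    env = "gpu";
else
    env = "cpu";
end
I = deepDreamImage(net,'conv1',1,'ExecutionEnvironment',env);
```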

Output Arguments


Output image, returned as a sequence of grayscale or truecolor (RGB) images stored in a 4-D array. Images are concatenated along the fourth dimension of I such that the image that maximizes the output of channels(k) is I(:,:,:,k). You can display the output image using imshow (Image Processing Toolbox).
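For example, you can display all returned images at once with montage (a sketch; montage requires Image Processing Toolbox and assumes the pretrained AlexNet network):

```matlab
% Sketch: visualize several channel images from the 4-D output at once.
net = alexnet;
I = deepDreamImage(net,'conv1',1:4);   % I is h-by-w-by-3-by-4
figure
montage(I)                             % tiles I(:,:,:,k) for each k
```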


Algorithms

This function implements a version of deep dream that uses a multi-resolution image pyramid and Laplacian Pyramid Gradient Normalization to generate high-resolution images. For more information on Laplacian Pyramid Gradient Normalization, see this blog post: DeepDreaming with TensorFlow.

When you train a neural network using the trainnet or trainNetwork functions, or when you use prediction or validation functions with DAGNetwork and SeriesNetwork objects, the software performs these computations using single-precision, floating-point arithmetic. Functions for prediction and validation include predict, classify, and activations. The software uses single-precision arithmetic when you train neural networks using both CPUs and GPUs.


References

[1] DeepDreaming with TensorFlow.

Version History

Introduced in R2017a