googlenet

GoogLeNet convolutional neural network

  • GoogLeNet network architecture

Description

GoogLeNet is a convolutional neural network that is 22 layers deep. You can load a pretrained version of the network trained on either the ImageNet [1] or Places365 [2] [3] data sets. The network trained on ImageNet classifies images into 1000 object categories, such as keyboard, mouse, pencil, and many animals. The network trained on Places365 is similar to the network trained on ImageNet, but classifies images into 365 different place categories, such as field, park, runway, and lobby. These networks have learned different feature representations for a wide range of images. The pretrained networks both have an image input size of 224-by-224. For more pretrained networks in MATLAB®, see Pretrained Deep Neural Networks.

To classify new images using GoogLeNet, use classify. For an example, see Classify Image Using GoogLeNet.
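
For instance, the following minimal sketch loads the pretrained network, resizes an image to the network input size, and classifies it. The file peppers.png is used here only because it ships with MATLAB; substitute your own image.

net = googlenet;                       % load the network pretrained on ImageNet
inputSize = net.Layers(1).InputSize;   % image input size, [224 224 3]
I = imread('peppers.png');             % example image; replace with your own file
I = imresize(I,inputSize(1:2));        % resize to match the network input size
label = classify(net,I);               % predict the class label
imshow(I)
title(string(label))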

You can retrain a GoogLeNet network to perform a new task using transfer learning. When performing transfer learning, the most common approach is to use networks pretrained on the ImageNet data set. If the new task is similar to classifying scenes, then using the network trained on Places365 can give higher accuracy. For an example showing how to retrain GoogLeNet on a new classification task, see Train Deep Learning Network to Classify New Images.
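
As a rough sketch of that workflow, the following replaces the final layers of GoogLeNet so the network predicts a new set of classes. The class count (5) is a placeholder, and the layer names 'loss3-classifier' and 'output' are assumptions based on the ImageNet-trained network; confirm them with analyzeNetwork or by inspecting lgraph.Layers before adapting this to your data.

net = googlenet;
lgraph = layerGraph(net);                      % convert to an editable layer graph
numClasses = 5;                                % placeholder; set to the number of classes in your data
newFC = fullyConnectedLayer(numClasses,'Name','new_fc', ...
    'WeightLearnRateFactor',10,'BiasLearnRateFactor',10);
lgraph = replaceLayer(lgraph,'loss3-classifier',newFC);   % final fully connected layer (assumed name)
newOutput = classificationLayer('Name','new_output');
lgraph = replaceLayer(lgraph,'output',newOutput);         % classification output layer (assumed name)
% Train on your labeled image datastore, for example:
% netTransfer = trainNetwork(imdsTrain,lgraph,trainingOptions('sgdm'));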

net = googlenet returns a GoogLeNet network trained on the ImageNet data set.

This function requires the Deep Learning Toolbox™ Model for GoogLeNet Network support package. If this support package is not installed, then the function provides a download link.

net = googlenet('Weights',weights) returns a GoogLeNet network trained on either the ImageNet or Places365 data set. The syntax googlenet('Weights','imagenet') (default) is equivalent to googlenet.

The network trained on ImageNet requires the Deep Learning Toolbox Model for GoogLeNet Network support package. The network trained on Places365 requires the Deep Learning Toolbox Model for Places365-GoogLeNet Network support package. If the required support package is not installed, then the function provides a download link.
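
For example, assuming the Places365-GoogLeNet support package is installed, this sketch loads the scene-classification network and lists a few of its 365 categories (the class names are stored on the final classification output layer):

net = googlenet('Weights','places365');    % requires the Places365-GoogLeNet support package
placeCategories = net.Layers(end).Classes; % the 365 scene categories
placeCategories(1:5)                       % inspect the first few category names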

lgraph = googlenet('Weights','none') returns the untrained GoogLeNet network architecture. The untrained model does not require the support package.
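
Because the untrained architecture needs no support package, you can inspect it with only Deep Learning Toolbox installed, for example:

lgraph = googlenet('Weights','none');   % untrained architecture, no weights downloaded
analyzeNetwork(lgraph)                  % interactive view of the layers and connections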

Examples

Download and install the Deep Learning Toolbox Model for GoogLeNet Network support package.

Type googlenet at the command line.

googlenet

If the Deep Learning Toolbox Model for GoogLeNet Network support package is not installed, then the function provides a link to the required support package in the Add-On Explorer. To install the support package, click the link, and then click Install. Check that the installation is successful by typing googlenet at the command line. If the required support package is installed, then the function returns a DAGNetwork object.

googlenet
ans = 

  DAGNetwork with properties:

         Layers: [144×1 nnet.cnn.layer.Layer]
    Connections: [170×2 table]

Visualize the network using Deep Network Designer.

deepNetworkDesigner(googlenet)

Explore other pretrained neural networks in Deep Network Designer by clicking New.

Deep Network Designer start page showing available pretrained neural networks

If you need to download a neural network, pause on the desired neural network and click Install to open the Add-On Explorer.

Input Arguments

weights — Source of network parameters, specified as 'imagenet' (default), 'places365', or 'none'.

  • If weights equals 'imagenet', then the network has weights trained on the ImageNet data set.

  • If weights equals 'places365', then the network has weights trained on the Places365 data set.

  • If weights equals 'none', then the untrained network architecture is returned.

Example: 'places365'

Output Arguments

net — Pretrained GoogLeNet convolutional neural network, returned as a DAGNetwork object.

lgraph — Untrained GoogLeNet convolutional neural network architecture, returned as a LayerGraph object.
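
As a quick check of the returned types, this sketch loads both forms (assuming the pretrained weights are available) and queries their classes:

net = googlenet;                        % pretrained weights
lgraph = googlenet('Weights','none');   % untrained architecture
class(net)      % 'DAGNetwork'
class(lgraph)   % 'nnet.cnn.LayerGraph'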

References

[1] ImageNet. http://www.image-net.org

[2] Zhou, Bolei, Aditya Khosla, Agata Lapedriza, Antonio Torralba, and Aude Oliva. "Places: An image database for deep scene understanding." arXiv preprint arXiv:1610.02055 (2016).

[3] Places. http://places2.csail.mit.edu/

[4] Szegedy, Christian, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. "Going deeper with convolutions." In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1-9. 2015.

Extended Capabilities

Version History

Introduced in R2017b