Get function approximator model from actor or critic
Modify Deep Neural Networks in Reinforcement Learning Agent
Create an environment with a continuous action space and obtain its observation and action specifications. For this example, load the environment used in the example Train DDPG Agent to Control Double Integrator System.
Load the predefined environment.
env = rlPredefinedEnv("DoubleIntegrator-Continuous");
Obtain observation and action specifications.
obsInfo = getObservationInfo(env);
actInfo = getActionInfo(env);
Create a PPO agent from the environment observation and action specifications. This agent uses default deep neural networks for its actor and critic.
agent = rlPPOAgent(obsInfo,actInfo);
To modify the deep neural networks within a reinforcement learning agent, you must first extract the actor and critic function approximators.
actor = getActor(agent);
critic = getCritic(agent);
Extract the deep neural networks from both the actor and critic function approximators.
actorNet = getModel(actor);
criticNet = getModel(critic);
The networks are dlnetwork objects. To view them using the plot function, you must convert them to layerGraph objects.
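For example, view the actor network, using the actorNet variable extracted above:
% Convert the dlnetwork to a layerGraph object, then plot it.
plot(layerGraph(actorNet))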
To validate a network, use analyzeNetwork. For example, validate the critic network.
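For instance, using the criticNet variable from this example:
% Open the Network Analyzer to check the critic network for errors.
analyzeNetwork(criticNet)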
You can modify the actor and critic networks and save them back to the agent. To modify the networks, you can use the Deep Network Designer app. To open the app for each network, use the following commands.
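% Open each extracted network in the Deep Network Designer app.
deepNetworkDesigner(criticNet)
deepNetworkDesigner(actorNet)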
In Deep Network Designer, modify the networks. For example, you can add additional layers to your network. When you modify the networks, do not change the input and output layers of the networks returned by getModel. For more information on building networks, see Build Networks with Deep Network Designer.
To validate a modified network in Deep Network Designer, click Analyze for dlnetwork under the Analysis section. To export the modified network structures to the MATLAB® workspace, generate code for creating the new networks and run this code from the command line. Do not use the exporting option in Deep Network Designer. For an example that shows how to generate and run code, see Create DQN Agent Using Deep Network Designer and Train Using Image Observations.
For this example, the code for creating the modified actor and critic networks is in the createModifiedNetworks helper script.
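Run the script, which is assumed here to create the modifiedActorNet and modifiedCriticNet variables used in the remainder of this example.
createModifiedNetworks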
Each of the modified networks includes an additional reluLayer in its main common path.
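View the modified actor network, assuming the modifiedActorNet variable from the helper script:
plot(layerGraph(modifiedActorNet))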
After exporting the networks, insert them into the actor and critic function approximators.
actor = setModel(actor,modifiedActorNet);
critic = setModel(critic,modifiedCriticNet);
Finally, insert the modified actor and critic function approximators into the agent.
agent = setActor(agent,actor);
agent = setCritic(agent,critic);
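As a quick check (this step is not part of the original example), you can confirm that the updated agent returns an action for a random observation:
% Evaluate the agent policy on a random observation.
getAction(agent,{rand(obsInfo.Dimension)})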
fcnAppx — Actor or critic function object
rlValueFunction object | rlQValueFunction object | rlVectorQValueFunction object | rlContinuousDeterministicActor object | rlDiscreteCategoricalActor object | rlContinuousGaussianActor object
Actor or critic function object, specified as one of the following:
rlValueFunction object — Value function critic
rlQValueFunction object — Q-value function critic
rlVectorQValueFunction object — Multi-output Q-value function critic with a discrete action space
rlContinuousDeterministicActor object — Deterministic policy actor with a continuous action space
rlDiscreteCategoricalActor object — Stochastic policy actor with a discrete action space
rlContinuousGaussianActor object — Stochastic policy actor with a continuous action space
To create an actor or critic function object, either construct it directly or extract it from an existing agent using getActor or getCritic.
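For instance, here is a minimal sketch of constructing a value function critic directly; the network architecture is illustrative and not part of the original example.
% Simple value network that maps observations to a scalar value.
net = dlnetwork([
    featureInputLayer(obsInfo.Dimension(1))
    fullyConnectedLayer(16)
    reluLayer
    fullyConnectedLayer(1)]);
critic = rlValueFunction(net,obsInfo);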
For agents with more than one critic, such as TD3 and SAC agents, you must call getModel for each critic individually, rather than calling getModel on the array returned by getCritic.
critics = getCritic(myTD3Agent);
criticNet1 = getModel(critics(1));
criticNet2 = getModel(critics(2));
model — Function approximation model
dlnetwork object | rlTable object | 1-by-2 cell array
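For example, for the default PPO agent created earlier on this page, the extracted critic model is a deep neural network; a quick check:
% Returns 'dlnetwork' for built-in agents created in R2021b or later.
class(getModel(getCritic(agent)))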
Version History
Introduced in R2020b
getModel now uses approximator objects instead of representation objects
Using representation objects to create actors and critics for reinforcement learning agents is no longer recommended. Therefore, getModel now uses function approximator objects instead.
getModel returns a dlnetwork object
Starting from R2021b, built-in agents use dlnetwork objects as actor and critic representations, so getModel returns a dlnetwork object.
Due to numerical differences in the network calculations, previously trained agents might behave differently. If this happens, you can retrain your agents.
To use Deep Learning Toolbox™ functions that do not support dlnetwork, you must convert the network to a layerGraph object.
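For example, to plot an extracted actor network (variable names follow the example above):
actorNet = getModel(actor);
plot(layerGraph(actorNet))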