
getActor

Extract actor from reinforcement learning agent

Since R2019a

Description


actor = getActor(agent) returns the actor object from the specified reinforcement learning agent.
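For instance, you can create a default agent from observation and action specifications and then extract its actor. The following is a minimal sketch; the agent type and specification dimensions are placeholders, not values used elsewhere on this page.

% Minimal sketch: create a default DDPG agent and extract its actor.
% The specification dimensions below are placeholder values.
obsInfo = rlNumericSpec([2 1]);        % observation: 2-element continuous vector
actInfo = rlNumericSpec([1 1]);        % action: scalar continuous value
agent = rlDDPGAgent(obsInfo,actInfo);  % agent with default actor and critic networks
actor = getActor(agent)                % rlContinuousDeterministicActor object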

Examples


Assume that you have an existing trained reinforcement learning agent. For this example, load the trained agent from Compare DDPG Agent to LQR Controller.

load("DoubleIntegDDPG.mat","agent") 

Obtain the actor function approximator from the agent.

actor = getActor(agent);

Obtain the learnable parameters from the actor.

params = getLearnableParameters(actor)
params=2×1 cell array
    {[-15.4601 -7.2076]}
    {[               0]}

Modify the parameter values. For this example, simply multiply all of the parameters by 2.

modifiedParams = cellfun(@(x) x*2,params,"UniformOutput",false);

Set the parameter values of the actor to the new modified values.

actor = setLearnableParameters(actor,modifiedParams);

Set the actor in the agent to the new modified actor.

setActor(agent,actor);

Display the new parameter values.

getLearnableParameters(getActor(agent))
ans=2×1 cell array
    {[-30.9201 -14.4153]}
    {[                0]}

Create an environment with a continuous action space and obtain its observation and action specifications. For this example, load the environment used in the example Compare DDPG Agent to LQR Controller.

Load the predefined environment.

env = rlPredefinedEnv("DoubleIntegrator-Continuous");

Obtain observation and action specifications.

obsInfo = getObservationInfo(env);
actInfo = getActionInfo(env);

Create a PPO agent from the environment observation and action specifications. This agent uses default deep neural networks for its actor and critic.

agent = rlPPOAgent(obsInfo,actInfo);

To modify the deep neural networks within a reinforcement learning agent, you must first extract the actor and critic function approximators.

actor = getActor(agent);
critic = getCritic(agent);

Extract the deep neural networks from both the actor and critic function approximators.

actorNet = getModel(actor);
criticNet = getModel(critic);

The networks are dlnetwork objects. To view them using the plot function, you must convert them to layerGraph objects.

For example, view the actor network.

plot(layerGraph(actorNet))

Figure: layer graph plot of the actor network.

To validate a network, use analyzeNetwork. For example, validate the critic network.

analyzeNetwork(criticNet)

You can modify the actor and critic networks and save them back to the agent. To modify the networks, you can use the Deep Network Designer app. To open the app for each network, use the following commands.

deepNetworkDesigner(criticNet)
deepNetworkDesigner(actorNet)

In Deep Network Designer, modify the networks. For example, you can add additional layers to your network. When you modify the networks, do not change the input and output layers of the networks returned by getModel. For more information on building networks, see Build Networks with Deep Network Designer.

To validate the modified network in Deep Network Designer, click Analyze in the Analysis section. To export the modified network structures to the MATLAB® workspace, generate code for creating the new networks and run this code from the command line. Do not use the export option in Deep Network Designer. For an example that shows how to generate and run code, see Create DQN Agent Using Deep Network Designer and Train Using Image Observations.
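If you prefer to modify a network from the command line instead of using the app, you can edit its layer graph directly. The following is a sketch only; the layer names "fc_2" and "output" are hypothetical placeholders, so inspect actorNet.Layers to find the actual names in your network before splicing in new layers.

% Sketch of a command-line modification (layer names are hypothetical).
lg = layerGraph(actorNet);                    % convert the dlnetwork to a layer graph
newLayers = [
    fullyConnectedLayer(64,Name="fc_new")     % additional fully connected layer
    reluLayer(Name="relu_new")];              % additional nonlinearity
lg = addLayers(lg,newLayers);
lg = disconnectLayers(lg,"fc_2","output");    % break the existing connection
lg = connectLayers(lg,"fc_2","fc_new");       % splice the new layers into the path
lg = connectLayers(lg,"relu_new","output");
modifiedActorNet = dlnetwork(lg);             % convert back to a dlnetwork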

For this example, the code for creating the modified actor and critic networks is in the createModifiedNetworks helper script.

createModifiedNetworks

Each of the modified networks includes an additional fullyConnectedLayer and reluLayer in its main common path. View the modified actor network.

plot(layerGraph(modifiedActorNet))

Figure: layer graph plot of the modified actor network.

After exporting the networks, insert the networks into the actor and critic function approximators.

actor = setModel(actor,modifiedActorNet);
critic = setModel(critic,modifiedCriticNet);

Finally, insert the modified actor and critic function approximators into the agent.

agent = setActor(agent,actor);
agent = setCritic(agent,critic);
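
As an optional check, extract the actor from the updated agent and plot its network to confirm that the agent now contains the modified structure.

% Optional check: the agent now returns the modified actor network.
plot(layerGraph(getModel(getActor(agent))))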

Input Arguments


Reinforcement learning agent that contains an actor, specified as one of the following objects:

  • rlDDPGAgent

  • rlTD3Agent

  • rlACAgent

  • rlPGAgent

  • rlPPOAgent

  • rlTRPOAgent

  • rlSACAgent

Note

If agent is an rlMBPOAgent object, to extract the actor, use getActor(agent.BaseAgent).

Output Arguments


Actor object, returned as one of the following:

  • rlContinuousDeterministicActor object — Returned when agent is an rlDDPGAgent or rlTD3Agent object.

  • rlDiscreteCategoricalActor object — Returned when agent is an rlACAgent, rlPGAgent, rlPPOAgent, rlTRPOAgent, or rlSACAgent object for an environment with a discrete action space.

  • rlContinuousGaussianActor object — Returned when agent is an rlACAgent, rlPGAgent, rlPPOAgent, rlTRPOAgent, or rlSACAgent object for an environment with a continuous action space (see the sketch after this list).
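
For instance, for a default SAC agent created for an environment with a continuous action space, getActor returns an rlContinuousGaussianActor object. The following is a minimal sketch; the specification dimensions and action limits are placeholder values.

% Sketch: actor type returned for a continuous-action SAC agent.
obsInfo = rlNumericSpec([4 1]);                             % placeholder observation spec
actInfo = rlNumericSpec([1 1],LowerLimit=-1,UpperLimit=1);  % placeholder bounded action spec
sacAgent = rlSACAgent(obsInfo,actInfo);                     % default actor and critic networks
actor = getActor(sacAgent)                                  % rlContinuousGaussianActor object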

Version History

Introduced in R2019a