setLearnableParameters

Set learnable parameter values of agent, function approximator, or policy object

Description

Agent

setLearnableParameters(agent,params) sets the learnable parameter values specified in params in the specified agent.

agent = setLearnableParameters(agent,params) also returns the new agent as an output argument.
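Because the agent is a handle object, the two syntaxes are interchangeable. As an illustrative sketch, the following copies the learnable parameters of one agent into another agent with an identical structure; the DDPG agents and the observation and action specifications are assumptions for illustration only.

```matlab
% Illustrative setup: two default DDPG agents with identical structure.
obsInfo = rlNumericSpec([2 1]);   % continuous two-element observation (assumed)
actInfo = rlNumericSpec([1 1]);   % continuous scalar action (assumed)
agentA = rlDDPGAgent(obsInfo,actInfo);
agentB = rlDDPGAgent(obsInfo,actInfo);

% Copy the learnable parameters of agentA into agentB.
params = getLearnableParameters(agentA);
setLearnableParameters(agentB,params);   % agentB is updated in place
```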

Actor or Critic

updatedFcnAppx = setLearnableParameters(fcnAppx,params) returns a new actor or critic function approximator object, updatedFcnAppx, with the same structure as the original function approximator object, fcnAppx, and the learnable parameter values specified in params. This syntax is equivalent to fcnAppx.Learnables=params.

Policy

updatedPolicy = setLearnableParameters(policy,params) returns a new policy object, updatedPolicy, with the same structure as the original policy object, policy, and the learnable parameter values specified in params.
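As a sketch of this syntax, the following extracts the greedy policy from a default PPO agent, halves its learnable parameters, and writes them back. The agent and its observation and action specifications are assumptions for illustration.

```matlab
% Illustrative agent; any agent with a compatible policy works similarly.
agent = rlPPOAgent(rlNumericSpec([2 1]),rlNumericSpec([1 1]));
policy = getGreedyPolicy(agent);

% Halve every learnable parameter and update the policy.
params = getLearnableParameters(policy);
halved = cellfun(@(x) x/2,params,"UniformOutput",false);
policy = setLearnableParameters(policy,halved);
```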

Examples

Modify Critic Parameter Values

Assume that you have an existing trained reinforcement learning agent. For this example, load the trained agent from Compare DDPG Agent to LQR Controller.

load("DoubleIntegDDPG.mat","agent") 

Obtain the critic from the agent.

critic = getCritic(agent);

For approximator objects, you can access the Learnables property using dot notation.

First, display the parameters.

critic.Learnables{1}
ans = 
  1×6 single dlarray

   -5.0017   -1.5513   -0.3424   -0.1116   -0.0506   -0.0047

Modify the parameter values. For this example, simply multiply all of the parameters by 2.

critic.Learnables{1} = critic.Learnables{1}*2;

Display the new parameters.

critic.Learnables{1}
ans = 
  1×6 single dlarray

  -10.0034   -3.1026   -0.6848   -0.2232   -0.1011   -0.0094

Alternatively, you can use getLearnableParameters and setLearnableParameters.

First, obtain the learnable parameters from the critic.

params = getLearnableParameters(critic)
params=2×1 cell array
    {[-10.0034 -3.1026 -0.6848 -0.2232 -0.1011 -0.0094]}
    {[                                               0]}

Modify the parameter values. For this example, simply divide all of the parameters by 2.

modifiedParams = cellfun(@(x) x/2,params,"UniformOutput",false);

Set the parameter values of the critic to the new modified values.

critic = setLearnableParameters(critic,modifiedParams);

Set the critic in the agent to the new modified critic.

setCritic(agent,critic);

Display the new parameter values.

getLearnableParameters(getCritic(agent))
ans=2×1 cell array
    {[-5.0017 -1.5513 -0.3424 -0.1116 -0.0506 -0.0047]}
    {[                                              0]}

Modify Actor Parameter Values

Assume that you have an existing trained reinforcement learning agent. For this example, load the trained agent from Compare DDPG Agent to LQR Controller.

load("DoubleIntegDDPG.mat","agent") 

Obtain the actor function approximator from the agent.

actor = getActor(agent);

For approximator objects, you can access the Learnables property using dot notation.

First, display the parameters.

actor.Learnables{1}
ans = 
  1×2 single dlarray

  -15.4663   -7.2746

Modify the parameter values. For this example, simply divide all of the parameters by 2.

actor.Learnables{1} = actor.Learnables{1}/2;

Display the new parameters.

actor.Learnables{1}
ans = 
  1×2 single dlarray

   -7.7331   -3.6373

Alternatively, you can use getLearnableParameters and setLearnableParameters.

Obtain the learnable parameters from the actor.

params = getLearnableParameters(actor)
params=2×1 cell array
    {[-7.7331 -3.6373]}
    {[              0]}

Modify the parameter values. For this example, simply multiply all of the parameters by 2.

modifiedParams = cellfun(@(x) x*2,params,"UniformOutput",false);

Set the parameter values of the actor to the new modified values.

actor = setLearnableParameters(actor,modifiedParams);

Set the actor in the agent to the new modified actor.

setActor(agent,actor);

Display the new parameter values.

getLearnableParameters(getActor(agent))
ans=2×1 cell array
    {[-15.4663 -7.2746]}
    {[               0]}

Input Arguments

Agent, specified as a reinforcement learning agent object.

Note

agent is a handle object, so a function that does not return it as output argument, such as train, can still update it. For more information about handle objects, see Handle Object Behavior.
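A minimal sketch of this behavior follows; the DQN agent and its specifications are assumptions for illustration.

```matlab
% agent is a handle object, so the call below modifies it even though
% no output argument is requested.
agent = rlDQNAgent(rlNumericSpec([2 1]),rlFiniteSetSpec([-1 1]));  % assumed specs
params = getLearnableParameters(agent);
zeroed = cellfun(@(x) 0*x,params,"UniformOutput",false);
setLearnableParameters(agent,zeroed);   % agent is still updated in place
```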

For more information on reinforcement learning agents, see Reinforcement Learning Agents.

Example: agent = rlPPOAgent(rlNumericSpec([2 1]),rlNumericSpec([1 1])) creates the default rlPPOAgent object agent for an environment with an observation channel carrying a continuous two-element vector and an action channel carrying a continuous scalar.

Function approximator, specified as an actor or critic function approximator object.

To create an actor or critic function object, use one of the following methods.

  • Create a function object directly.

  • Obtain the existing critic from an agent using getCritic.

  • Obtain the existing actor from an agent using getActor.

Example: critic = rlValueFunction(dlnetwork([featureInputLayer(2) fullyConnectedLayer(10) reluLayer fullyConnectedLayer(1)]),rlNumericSpec([2 1])); creates the rlValueFunction object critic.

Policy, specified as a policy object.

For more information on reinforcement learning policies, see Create Actors, Critics, and Policy Objects.

Example: policy = getExplorationPolicy(rlPPOAgent(rlNumericSpec([2 1]),rlNumericSpec([1 1]))) extracts the object that implements the exploration policy from a default PPO agent and assigns it to the variable policy.

Learnable parameter values, specified as a cell array. The parameters in params must be compatible with the structure and parameterization of the agent, function approximator, or policy object passed as the first argument.

To obtain a cell array of learnable parameter values from an existing agent, function approximator, or policy object, which you can then modify, use the getLearnableParameters function.

Example: {[1.2 -2.4 3.1 0 0.1 -0.9]}

Output Arguments

Updated actor or critic object, returned as a function approximator object of the same type as fcnAppx. Apart from its new learnable parameter values, updatedFcnAppx is the same as fcnAppx.

Updated reinforcement learning policy, returned as a policy object of the same type as policy. Apart from the learnable parameter values, updatedPolicy is the same as policy.

Updated agent, returned as an agent object. Note that agent is a handle object. Therefore, setLearnableParameters updates its parameters whether or not agent is returned as an output argument. For more information about handle objects, see Handle Object Behavior.

Tips

  • You can also obtain and modify the learnable parameters of function approximator objects, such as actors and critics, by accessing their Learnables property using dot notation.

Version History

Introduced in R2019a
