REINFORCE Policy Gradient (PG) Agent
The REINFORCE policy gradient (PG) algorithm is an on-policy reinforcement learning method for environments with a discrete or continuous action space. The REINFORCE policy gradient agent (sometimes also referred to as a Monte Carlo policy gradient or vanilla policy gradient agent) is a policy-based reinforcement learning agent that uses the REINFORCE algorithm to search for an optimal stochastic policy, that is, a stochastic policy that maximizes the expected discounted cumulative long-term reward. Because this algorithm belongs to the class of Monte Carlo methods, the agent does not learn during an episode but only after an episode is finished.
To reduce the variance of the parameter updates, you can use a baseline value function critic that estimates the expected discounted cumulative long-term reward. Note that such a baseline does not fully act as a critic because it is not used for bootstrapping (that is, it is not used to update the value estimate of a state based on the value estimates of subsequent states).
Note
The PG agent generally has no functional advantage over more recent agents such as PPO and is provided mostly for educational purposes.
For more information on the different types of reinforcement learning agents, see Reinforcement Learning Agents.
In Reinforcement Learning Toolbox™, a REINFORCE policy gradient agent is implemented by an rlPGAgent object.
Policy gradient agents can be trained in environments with the following observation and action spaces.
Observation Space | Action Space
---|---
Discrete or continuous | Discrete or continuous
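For example, the following minimal sketch shows how to inspect the observation and action specifications of an environment to confirm whether its spaces are discrete or continuous. The predefined "CartPole-Discrete" environment is used here only as an illustration.

```matlab
% Minimal sketch: inspect the observation and action spaces of an environment.
% "CartPole-Discrete" is an illustrative predefined environment.
env = rlPredefinedEnv("CartPole-Discrete");

obsInfo = getObservationInfo(env);   % observation specification(s)
actInfo = getActionInfo(env);        % action specification(s)

% An rlNumericSpec specification indicates a continuous space,
% an rlFiniteSetSpec specification indicates a discrete one.
disp(class(obsInfo))   % continuous observations for this environment
disp(class(actInfo))   % discrete actions for this environment
```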
Policy gradient agents use the following actor and critic.
Critic (if a baseline is used) | Actor
---|---
Value function critic V(S), which you create using rlValueFunction | Stochastic policy actor π(S), which you create using rlDiscreteCategoricalActor or rlContinuousGaussianActor
During training, a PG agent:
Estimates probabilities of taking each action in the action space and randomly selects actions based on the probability distribution.
Completes a full training episode using the current policy before learning from the experience and updating the policy parameters.
If the UseExplorationPolicy option of the agent is set to false, the action with maximum likelihood is always used in sim and generatePolicyFunction. As a result, the simulated agent and generated policy behave deterministically.

If the UseExplorationPolicy option is set to true, the agent selects its actions by sampling its probability distribution. As a result, the policy is stochastic and the agent explores its observation space.

This option affects only simulation and deployment; it does not affect training.
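The following sketch, assuming an existing agent and env, toggles this option with dot notation before simulation and policy generation.

```matlab
% Deterministic simulation and deployment: always take the maximum-likelihood action.
agent.UseExplorationPolicy = false;
experience = sim(env, agent);      % deterministic rollout
generatePolicyFunction(agent);     % generated policy code is deterministic as well

% Stochastic simulation: sample actions from the actor probability distribution.
agent.UseExplorationPolicy = true;
experience = sim(env, agent);      % stochastic rollout
```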
Actor and Critic Function Approximators
Policy gradient agents represent the policy using an actor function approximator π(A|S;θ) with parameters θ. The actor outputs the conditional probability of taking each action A when in state S as one of the following:
Discrete action space — The probability of taking each discrete action. The sum of these probabilities across all actions is 1.
Continuous action space — The mean and standard deviation of the Gaussian probability distribution for each continuous action.
To reduce the variance of the parameter updates during gradient estimation, REINFORCE policy gradient agents can use a baseline value function, which is estimated using a critic function approximator, V(S;ϕ) with parameters ϕ. The critic computes the value function for a given observation state.
For more information on creating actors and critics for function approximation, see Create Policies and Value Functions.
During training, the agent tunes the parameter values in θ. After training, the parameters remain at their tuned values and the trained actor function approximator is stored in π(A|S).
Agent Creation
You can create a REINFORCE policy gradient agent with a default actor and critic based on the observation and action specifications from the environment. To do so, perform the following steps (a minimal example follows the list).

1. Create observation specifications for your environment. If you already have an environment object, you can obtain these specifications using getObservationInfo.
2. Create action specifications for your environment. If you already have an environment object, you can obtain these specifications using getActionInfo.
3. If needed, specify the number of neurons in each learnable layer of the default network or whether to use an LSTM layer. To do so, create an agent initialization option object using rlAgentInitializationOptions.
4. If needed, specify agent options using an rlPGAgentOptions object.
5. Create the agent using an rlPGAgent object.
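The following is a minimal sketch of these steps, assuming a predefined environment; the environment name, number of hidden units, and option values are illustrative.

```matlab
% Steps 1-2: get observation and action specifications from an environment.
env = rlPredefinedEnv("CartPole-Discrete");
obsInfo = getObservationInfo(env);
actInfo = getActionInfo(env);

% Step 3 (optional): configure the default networks.
initOpts = rlAgentInitializationOptions(NumHiddenUnit=64);

% Step 4 (optional): configure the agent.
agentOpts = rlPGAgentOptions(UseBaseline=true);

% Step 5: create a PG agent with a default actor (and baseline critic).
agent = rlPGAgent(obsInfo, actInfo, initOpts, agentOpts);
```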
Alternatively, you can create the actor and critic first and use these objects to create your agent. In this case, ensure that the input and output dimensions of the actor and critic match the corresponding action and observation specifications of the environment (a sketch of this workflow follows the list).

1. Create an actor using an rlDiscreteCategoricalActor (for a discrete action space) or rlContinuousGaussianActor (for a continuous action space) object.
2. If you are using a baseline function, create a critic using an rlValueFunction object.
3. Specify agent options using an rlPGAgentOptions object (alternatively, you can skip this step and then modify the agent options later using dot notation).
4. Create the agent using an rlPGAgent object.
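The following sketch illustrates this workflow for a discrete action space, assuming an existing environment env; the network architectures and layer sizes are illustrative only.

```matlab
obsInfo = getObservationInfo(env);
actInfo = getActionInfo(env);

% Step 1: actor network maps observations to one output per discrete action.
actorNet = [
    featureInputLayer(prod(obsInfo.Dimension))
    fullyConnectedLayer(64)
    reluLayer
    fullyConnectedLayer(numel(actInfo.Elements))];
actor = rlDiscreteCategoricalActor(actorNet, obsInfo, actInfo);

% Step 2: baseline critic network maps observations to a scalar state value.
criticNet = [
    featureInputLayer(prod(obsInfo.Dimension))
    fullyConnectedLayer(64)
    reluLayer
    fullyConnectedLayer(1)];
baseline = rlValueFunction(criticNet, obsInfo);

% Steps 3-4: specify agent options and create the agent.
agentOpts = rlPGAgentOptions(UseBaseline=true);
agent = rlPGAgent(actor, baseline, agentOpts);
```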
For more information on creating actors and critics for function approximation, see Create Policies and Value Functions.
Training Algorithm
PG agents use the REINFORCE (also known as Monte Carlo policy gradient) algorithm, either with or without a baseline. To configure the training algorithm, specify options using an rlPGAgentOptions object.
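For example, the following sketch (with illustrative option values, and assuming a toolbox release in which the agent options object exposes ActorOptimizerOptions and CriticOptimizerOptions properties) configures the options referenced in the algorithms below.

```matlab
% Optimizer settings for the actor and the baseline critic (illustrative values).
actorOpts  = rlOptimizerOptions(LearnRate=1e-3, GradientThreshold=1);
criticOpts = rlOptimizerOptions(LearnRate=1e-2, GradientThreshold=1);

agentOpts = rlPGAgentOptions( ...
    UseBaseline=true, ...            % REINFORCE with baseline
    DiscountFactor=0.99, ...         % discount factor gamma
    EntropyLossWeight=0.01, ...      % weight of the entropy loss gradients
    ActorOptimizerOptions=actorOpts, ...
    CriticOptimizerOptions=criticOpts);
```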
REINFORCE Algorithm
1. Initialize the actor π(S;θ) with random parameter values in θ.

2. For each training episode, generate the episode experience by following the actor policy π(S). To select an action, the actor generates probabilities for each action in the action space, then the agent randomly selects an action based on the probability distribution. The agent takes actions until it reaches the terminal state ST. The episode experience consists of the sequence

   S0, A0, R1, S1, …, ST−1, AT−1, RT, ST

   Here, St is a state observation, At is an action taken from that state, St+1 is the next state, and Rt+1 is the reward received for moving from St to St+1.

3. For each state in the episode sequence, that is, for t = 1, 2, …, T−1, calculate the return Gt, which is the discounted future reward:

   Gt = Rt+1 + γRt+2 + γ²Rt+3 + … + γ^(T−t−1)RT

   Here, γ is the discount factor, which you specify using the DiscountFactor agent option. (A plain-MATLAB sketch of this computation follows the algorithm.)

4. Accumulate the gradients for the actor network by following the gradient of the policy to maximize the expected discounted cumulative long-term reward:

   dθ = Σt Gt ∇θ ln π(At|St;θ), summed over t = 1, 2, …, T−1

   If the EntropyLossWeight option is greater than zero, then additional gradients are accumulated to minimize the entropy loss function.

5. Update the actor parameters by applying the gradients:

   θ = θ + α dθ

   Here, α is the learning rate of the actor. Specify the learning rate when you create the actor by setting the LearnRate option in the rlActorOptimizerOptions property within the agent options object. For simplicity, this step shows a gradient update using basic stochastic gradient descent. The actual gradient update method depends on the optimizer you specify in the rlOptimizerOptions object assigned to the rlActorOptimizerOptions property.

6. Repeat steps 2 through 5 for each training episode until training is complete.
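As a plain-MATLAB illustration of step 3 (not part of the toolbox API), the following sketch computes the discounted returns for a hypothetical episode by working backward from the terminal step; here R(t) denotes the reward collected at step t of the episode.

```matlab
% Discounted returns for one episode, computed backward from the terminal step.
gamma = 0.99;             % discount factor
R = [0 0 0 0 1];          % hypothetical per-step rewards for one episode
T = numel(R);

G = zeros(1, T);
G(T) = R(T);
for t = T-1:-1:1
    G(t) = R(t) + gamma*G(t+1);   % return at step t: reward plus discounted future return
end
disp(G)
```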
REINFORCE with Baseline Algorithm
1. Initialize the actor π(S;θ) with random parameter values in θ.

2. Initialize the critic V(S;ϕ) with random parameter values in ϕ.

3. For each training episode, generate the episode experience by following the actor policy π(S). The episode experience consists of the sequence

   S0, A0, R1, S1, …, ST−1, AT−1, RT, ST

4. For t = 1, 2, …, T−1:

   Calculate the return Gt, which is the discounted future reward:

   Gt = Rt+1 + γRt+2 + γ²Rt+3 + … + γ^(T−t−1)RT

   Compute the advantage function δt using the baseline value function estimate from the critic:

   δt = Gt − V(St;ϕ)

5. Accumulate the gradients for the critic network:

   dϕ = Σt δt ∇ϕ V(St;ϕ), summed over t = 1, 2, …, T−1

6. Accumulate the gradients for the actor network:

   dθ = Σt δt ∇θ ln π(At|St;θ), summed over t = 1, 2, …, T−1

   If the EntropyLossWeight option is greater than zero, then additional gradients are accumulated to minimize the entropy loss function.

7. Update the critic parameters ϕ:

   ϕ = ϕ + β dϕ

   Here, β is the learning rate of the critic. Specify the learning rate when you create the critic by setting the LearnRate option in the rlCriticOptimizerOptions property within the agent options object.

8. Update the actor parameters θ:

   θ = θ + α dθ

   Here, α is the learning rate of the actor, as in the REINFORCE algorithm above.

9. Repeat steps 3 through 8 for each training episode until training is complete.
For simplicity, the actor and critic updates in this algorithm show a gradient update using basic stochastic gradient descent. The actual gradient update method depends on the optimizer you specify in the rlOptimizerOptions objects assigned to the rlActorOptimizerOptions and rlCriticOptimizerOptions properties.
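To run either training loop, pass the agent and environment to the train function. The following is a minimal sketch with illustrative stopping criteria.

```matlab
% Train the PG agent; option values are illustrative.
trainOpts = rlTrainingOptions( ...
    MaxEpisodes=1000, ...
    MaxStepsPerEpisode=500, ...
    StopTrainingCriteria="AverageReward", ...
    StopTrainingValue=480);

trainStats = train(agent, env, trainOpts);
```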
References
[1] Williams, Ronald J. “Simple Statistical Gradient-Following Algorithms for Connectionist Reinforcement Learning.” Machine Learning 8, no. 3–4 (May 1992): 229–56. https://doi.org/10.1007/BF00992696.
[2] Sutton, Richard S., and Andrew G. Barto. Reinforcement Learning: An Introduction. Second edition. Adaptive Computation and Machine Learning. Cambridge, Mass: The MIT Press, 2018.