
Soft Actor-Critic (SAC) Agents

The soft actor-critic (SAC) algorithm is a model-free, online, off-policy, actor-critic reinforcement learning method. The SAC algorithm computes an optimal policy that maximizes both the long-term expected reward and the entropy of the policy. The policy entropy is a measure of policy uncertainty given the state. A higher entropy value promotes more exploration. Maximizing both the expected cumulative long-term reward and the entropy helps balance exploitation and exploration of the environment.
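
In the notation used later on this page, the policy entropy at a given state S is the expected negative log-density of the actions that the policy draws in that state:

    H(\pi(\cdot|S)) = -\mathbb{E}_{A \sim \pi(\cdot|S)}\left[\ln \pi(A|S)\right]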

For more information on the different types of reinforcement learning agents, see Reinforcement Learning Agents.

The implementation of the SAC agent in Reinforcement Learning Toolbox™ software uses two Q-value function critics, which helps prevent overestimation of the value function. Other implementations of the SAC algorithm use an additional value function critic.

SAC agents can be trained in environments with the following observation and action spaces.

Observation Space: Discrete or continuous
Action Space: Continuous
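
For example, you can verify that an existing environment meets these requirements by inspecting its specifications. This minimal sketch assumes the predefined "DoubleIntegrator-Continuous" environment; any environment interface object with a continuous action specification works the same way.

    % Inspect the observation and action specifications of an environment.
    env = rlPredefinedEnv("DoubleIntegrator-Continuous");
    obsInfo = getObservationInfo(env)   % discrete or continuous observations are supported
    actInfo = getActionInfo(env)        % must be continuous (rlNumericSpec) for a SAC agent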

SAC agents use the following actor and critic.

Critics: Q-value function critics Q(S,A), which you create using rlQValueFunction
Actor: Stochastic policy actor π(S), which you create using rlContinuousGaussianActor

During training, a SAC agent:

  • Updates the actor and critic properties at regular intervals during learning.

  • Estimates the mean and standard deviation of a Gaussian probability distribution for the continuous action space, then randomly selects actions based on the distribution.

  • Updates an entropy weight term that balances the expected return and the entropy of the policy.

  • Stores past experience using a circular experience buffer. The agent updates the actor and critic using a mini-batch of experiences randomly sampled from the buffer.
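
The options that control these behaviors are properties of the rlSACAgentOptions object. The following sketch uses illustrative values only; the property names are the documented agent options referenced on this page.

    % Illustrative configuration of the experience buffer, mini-batch, and entropy target.
    agentOpts = rlSACAgentOptions;
    agentOpts.ExperienceBufferLength = 1e6;   % capacity of the circular experience buffer
    agentOpts.MiniBatchSize          = 256;   % experiences sampled per update
    agentOpts.DiscountFactor         = 0.99;  % discount factor for the expected return
    agentOpts.EntropyWeightOptions.TargetEntropy = -1;  % target entropy for the weight update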

If the UseExplorationPolicy option of the agent is set to false, the action with maximum likelihood is always used in sim and generatePolicyFunction. As a result, the simulated agent and generated policy behave deterministically.

If the UseExplorationPolicy option is set to true, the agent selects its actions by sampling its probability distribution. As a result, the policy is stochastic and the agent explores its observation space.

This option affects only simulation and deployment; it does not affect training.
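
For example, assuming agent is an rlSACAgent and env is its environment, you can compare the two behaviors in simulation as follows (the MaxSteps value is illustrative).

    % Deterministic behavior: always take the maximum-likelihood (mean) action.
    agent.UseExplorationPolicy = false;
    expDeterministic = sim(env, agent, rlSimulationOptions(MaxSteps=500));

    % Stochastic behavior: sample each action from the Gaussian policy.
    agent.UseExplorationPolicy = true;
    expStochastic = sim(env, agent, rlSimulationOptions(MaxSteps=500));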

Actor and Critic Function Approximators

To estimate the policy and value function, a SAC agent maintains the following function approximators.

  • Stochastic actor π(A|S;θ) — The actor, with parameters θ, outputs the mean and standard deviation of the conditional Gaussian probability of taking each continuous action A when in state S.

  • One or two Q-value critics Qk(S,A;ϕk) — The critics, each with parameters ϕk, take observation S and action A as inputs and return the corresponding expectation of the value function, which includes both the long-term reward and entropy.

  • One or two target critics Qtk(S,A;ϕtk) — To improve the stability of the optimization, the agent periodically sets the target critic parameters ϕtk to the latest corresponding critic parameter values. The number of target critics matches the number of critics.

When you use two critics, Q1(S,A;ϕ1) and Q2(S,A;ϕ2), the critics can have different structures. When the critics have the same structure, they must have different initial parameter values.

Each critic Qk(S,A;ϕk) and corresponding target critic Qtk(S,A;ϕtk) must have the same structure and parameterization.

For more information on creating actors and critics for function approximation, see Create Policies and Value Functions.

During training, the agent tunes the parameter values θ. After training, the parameters remain at their tuned values and the trained actor function approximator is stored in π(A|S).
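
For example, after training you can query the stored policy directly. This sketch assumes agent is a trained rlSACAgent and obsInfo is its observation specification.

    % Evaluate the trained policy for a random observation.
    obs = rand(obsInfo.Dimension);      % observation with the dimensions given by obsInfo
    action = getAction(agent, {obs});   % returns a cell array with one element per action channel
    action{1}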

Action Generation

The actor in a SAC agent generates mean and standard deviation outputs. To select an action, the actor first randomly selects an unbounded action from a Gaussian distribution with these parameters. During training, the SAC agent uses the unbounded probability distribution to compute the entropy of the policy for the given observation.

If the SAC agent needs to generate bounded actions, the actor applies tanh and scaling operations to the unbounded action sampled from the Gaussian distribution.

Figure: Generation of a bounded action from an unbounded action randomly selected from a Gaussian distribution.
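
The following sketch illustrates the two operations for a single action channel, assuming a symmetric action range of [-2, 2]. Inside the agent, this squashing and scaling is applied automatically.

    % Squash an unbounded Gaussian sample and scale it to the range [-2, 2].
    mu = 0.3;  sigma = 0.8;              % mean and standard deviation from the actor
    uUnbounded = mu + sigma*randn;       % unbounded action sampled from the Gaussian
    actionMax  = 2;                      % half-width of the symmetric action range
    action = actionMax*tanh(uUnbounded); % bounded action in (-2, 2)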

Agent Creation

You can create and train SAC agents at the MATLAB® command line or using the Reinforcement Learning Designer app. For more information on creating agents using Reinforcement Learning Designer, see Create Agents Using Reinforcement Learning Designer.

At the command line, you can create a SAC agent with a default actor and critics based on the observation and action specifications from the environment. To do so, perform the following steps (a code sketch follows the list).

  1. Create observation specifications for your environment. If you already have an environment interface object, you can obtain these specifications using getObservationInfo.

  2. Create action specifications for your environment. If you already have an environment interface object, you can obtain these specifications using getActionInfo.

  3. If needed, specify the number of neurons in each learnable layer or whether to use a recurrent neural network. To do so, create an agent initialization option object using rlAgentInitializationOptions.

  4. If needed, specify agent options using an rlSACAgentOptions object.

  5. Create the agent using an rlSACAgent object.
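
A minimal sketch of these steps, assuming env is an environment interface object with a continuous action specification (the numeric values are illustrative):

    obsInfo = getObservationInfo(env);                            % step 1
    actInfo = getActionInfo(env);                                 % step 2
    initOpts  = rlAgentInitializationOptions(NumHiddenUnit=128);  % step 3 (optional)
    agentOpts = rlSACAgentOptions(MiniBatchSize=128);             % step 4 (optional)
    agent = rlSACAgent(obsInfo, actInfo, initOpts, agentOpts);    % step 5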

Alternatively, you can create an actor and critics and use these objects to create your agent (a code sketch follows the list). In this case, ensure that the input and output dimensions of the actor and critics match the corresponding observation and action specifications of the environment.

  1. Create a stochastic actor using an rlContinuousGaussianActor object. Because a SAC agent internally applies tanh and scaling operations to bound the actions, the actor network must not contain a tanhLayer and scalingLayer as the last two layers in the output path for the mean values; otherwise the mean values are not properly scaled to the desired action range. However, to ensure nonnegativity of the standard deviation values, the actor network must contain a reluLayer as the last layer in the output path for the standard deviation values.

  2. Create one or two critics using rlQValueFunction objects.

  3. Specify agent options using an rlSACAgentOptions object.

  4. Create the agent using an rlSACAgent object.
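
The following sketch illustrates these steps for a hypothetical environment with a four-dimensional continuous observation and a scalar action bounded to [-1, 1]; the layer sizes and layer names are illustrative.

    % Illustrative observation and action specifications.
    obsInfo = rlNumericSpec([4 1]);
    actInfo = rlNumericSpec([1 1], LowerLimit=-1, UpperLimit=1);

    % Actor network: a common path that branches into a mean path (no tanhLayer or
    % scalingLayer at the end) and a standard deviation path (reluLayer at the end).
    commonPath = [
        featureInputLayer(prod(obsInfo.Dimension), Name="obs")
        fullyConnectedLayer(64)
        reluLayer(Name="common")];
    meanPath = [
        fullyConnectedLayer(32, Name="meanFC")
        reluLayer
        fullyConnectedLayer(prod(actInfo.Dimension), Name="mean")];
    stdPath = [
        fullyConnectedLayer(32, Name="stdFC")
        reluLayer
        fullyConnectedLayer(prod(actInfo.Dimension))
        reluLayer(Name="std")];

    actorLG = layerGraph(commonPath);
    actorLG = addLayers(actorLG, meanPath);
    actorLG = addLayers(actorLG, stdPath);
    actorLG = connectLayers(actorLG, "common", "meanFC");
    actorLG = connectLayers(actorLG, "common", "stdFC");

    actor = rlContinuousGaussianActor(dlnetwork(actorLG), obsInfo, actInfo, ...
        ObservationInputNames="obs", ...
        ActionMeanOutputNames="mean", ...
        ActionStandardDeviationOutputNames="std");                % step 1

    % Critic network: observation and action paths merged into a scalar Q-value.
    obsPath = [
        featureInputLayer(prod(obsInfo.Dimension), Name="cObs")
        fullyConnectedLayer(64, Name="obsFC")];
    actPath = [
        featureInputLayer(prod(actInfo.Dimension), Name="cAct")
        fullyConnectedLayer(64, Name="actFC")];
    commonQ = [
        additionLayer(2, Name="add")
        reluLayer
        fullyConnectedLayer(1)];

    criticLG = layerGraph(obsPath);
    criticLG = addLayers(criticLG, actPath);
    criticLG = addLayers(criticLG, commonQ);
    criticLG = connectLayers(criticLG, "obsFC", "add/in1");
    criticLG = connectLayers(criticLG, "actFC", "add/in2");

    % Two critics with the same structure; each dlnetwork conversion draws its own
    % random initial parameter values.
    critic1 = rlQValueFunction(dlnetwork(criticLG), obsInfo, actInfo, ...
        ObservationInputNames="cObs", ActionInputNames="cAct");
    critic2 = rlQValueFunction(dlnetwork(criticLG), obsInfo, actInfo, ...
        ObservationInputNames="cObs", ActionInputNames="cAct");   % step 2

    agentOpts = rlSACAgentOptions;                                % step 3
    agent = rlSACAgent(actor, [critic1 critic2], agentOpts);      % step 4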

For more information on creating actors and critics for function approximation, see Create Policies and Value Functions.

Training Algorithm

SAC agents use the following training algorithm, in which they periodically update their actor and critic models and the entropy weight. To configure the training algorithm, specify options using an rlSACAgentOptions object. Here, K is the number of critics (one or two) and k is the critic index. A sketch that maps the algorithm quantities to agent options follows the steps.

  • Initialize each critic Qk(S,A;ϕk) with random parameter values ϕk, and initialize each target critic with the same random parameter values: ϕtk=ϕk.

  • Initialize the actor π(S;θ) with random parameter values θ.

  • Perform a warm start by taking a sequence of actions following the initial random policy in π(S). For each action, store the experience in the experience buffer. To specify the number of warm start steps, use the NumWarmStartSteps option.

  • For each training time step:

    1. For the current observation S, select action A using the policy in π(S;θ).

    2. Execute action A. Observe the reward R and next observation S'.

    3. Store the experience (S,A,R,S') in the experience buffer.

    4. Sample a random mini-batch of M experiences (Si,Ai,Ri,S'i) from the experience buffer. To specify M, use the MiniBatchSize option.

    5. Every DC time steps, update the parameters of each critic by minimizing the loss Lk across all sampled experiences. To specify DC, use the LearningFrequency option.

      L_k = \frac{1}{2M}\sum_{i=1}^{M}\left( y_i - Q_k(S_i,A_i;\phi_k) \right)^2

      If S'i is a terminal state, the value function target yi is equal to the experience reward Ri. Otherwise, the value function target is the sum of Ri, the minimum discounted future reward from the target critics, and the weighted entropy.

      y_i = R_i + \gamma \min_k\left( Q_{tk}(S_i',A_i';\phi_{tk}) \right) - \alpha \ln \pi(A_i'|S_i';\theta)

      Here:

      • A'i is the bounded action derived from the unbounded output of the actor π(S'i).

      • γ is the discount factor, which you specify using the DiscountFactor option.

      • αlnπ(A'i|S'i;θ) is the weighted policy entropy term for the bounded output of the actor when in state S'i. α is the entropy loss weight, which you specify using the EntropyLossWeight option.

      If you set the NumStepsToLookAhead option to a value N, the agent calculates the target yi using the N-step return, which adds the rewards of the following N steps and the discounted estimated value of the state that caused the N-th reward.

    6. Every DA time steps, update the actor parameters by minimizing the following objective function. To set DA, use both the LearningFrequency and the PolicyUpdateFrequency options.

      J_\pi = \frac{1}{M}\sum_{i=1}^{M}\left( -\min_k\left( Q_k(S_i,A_i;\phi_k) \right) + \alpha \ln \pi(A_i|S_i;\theta) \right)

      Here, Ai is the bounded action derived from the unbounded output of the actor for observation Si (an action sampled from the current policy, not the action stored in the experience buffer).

    7. Every DA time steps, also update the entropy weight by minimizing the following loss function.

      L_\alpha = \frac{1}{M}\sum_{i=1}^{M}\left( -\alpha \ln \pi(A_i|S_i;\theta) - \alpha \mathcal{H}' \right)

      Here, ℋ′ is the target entropy, which you specify using the EntropyWeightOptions.TargetEntropy option.

    8. Every DT steps, update the target critics depending on the target update method. To specify DT, use the TargetUpdateFrequency option. For more information, see Target Update Methods.

    9. Repeat steps 4 through 8 NG times, where NG is the number of gradient steps, which you specify using the NumGradientStepsPerUpdate option.
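
The following sketch shows how the quantities in this algorithm map to rlSACAgentOptions properties. The option names are the ones cited in the steps above; the numeric values are placeholders.

    opts = rlSACAgentOptions;
    opts.NumWarmStartSteps         = 1000;  % warm-start experiences collected before learning
    opts.MiniBatchSize             = 256;   % M
    opts.DiscountFactor            = 0.99;  % gamma
    opts.LearningFrequency         = 1;     % DC
    opts.PolicyUpdateFrequency     = 1;     % together with LearningFrequency, sets DA
    opts.TargetUpdateFrequency     = 1;     % DT
    opts.NumGradientStepsPerUpdate = 1;     % NG
    opts.NumStepsToLookAhead       = 1;     % N (N-step return)
    opts.EntropyWeightOptions.TargetEntropy = -1;  % target entropy H'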

Target Update Methods

SAC agents update their target critic parameters using one of the following target update methods.

  • Smoothing — Update the target critic parameters at every time step using smoothing factor τ. To specify the smoothing factor, use the TargetSmoothFactor option.

    \phi_{tk} = \tau\phi_k + (1-\tau)\phi_{tk}

  • Periodic — Update the target critic parameters periodically without smoothing (TargetSmoothFactor = 1). To specify the update period, use the TargetUpdateFrequency parameter.

    \phi_{tk} = \phi_k

  • Periodic smoothing — Update the target parameters periodically with smoothing.

To configure the target update method, create an rlSACAgentOptions object, and set the TargetUpdateFrequency and TargetSmoothFactor parameters as shown in the following table.

Update Method          TargetUpdateFrequency    TargetSmoothFactor
Smoothing (default)    1                        Less than 1
Periodic               Greater than 1           1
Periodic smoothing     Greater than 1           Less than 1
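
For example, the following sketch configures a periodic smoothing update (the values are illustrative).

    opts = rlSACAgentOptions;
    opts.TargetUpdateFrequency = 4;      % greater than 1: update the targets periodically
    opts.TargetSmoothFactor    = 1e-3;   % less than 1: smooth the updates with factor tau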

References

[1] Haarnoja, Tuomas, Aurick Zhou, Kristian Hartikainen, George Tucker, Sehoon Ha, Jie Tan, Vikash Kumar, et al. "Soft Actor-Critic Algorithms and Applications." Preprint, submitted January 29, 2019. https://arxiv.org/abs/1812.05905.
