
rlAgentInitializationOptions

Options for initializing reinforcement learning agents

Description

Use the rlAgentInitializationOptions object to specify initialization options for an agent. To create an agent, use the specific agent creation function, such as rlACAgent.

Creation

Description

initOpts = rlAgentInitializationOptions returns a default options object for initializing a reinforcement learning agent that supports default networks. Use the initialization options to specify agent initialization parameters, such as the number of units for each hidden layer of the agent networks and whether to use a recurrent neural network.


initOpts = rlAgentInitializationOptions(Name,Value) creates an initialization options object and sets its properties using one or more name-value pair arguments.

Properties


NumHiddenUnit — Number of units in each hidden fully connected layer of the agent networks, except for the fully connected layer just before the network output, specified as a positive integer. The value you set also applies to any LSTM layers.

Example: 'NumHiddenUnit',64

UseRNN — Flag to use a recurrent neural network, specified as a logical value.

If you set UseRNN to true, during agent creation the software inserts a recurrent LSTM layer, with the output mode set to sequence, in the output path of the agent networks. Policy gradient (PG) and actor-critic (AC) agents do not support recurrent neural networks. For more information on LSTM networks, see Long Short-Term Memory Networks.
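As a sketch of how the UseRNN option is used in practice, the following creates a recurrent agent of a type that supports RNNs (DQN). The observation and action specifications here are illustrative placeholders, not part of this reference page; running the code requires Reinforcement Learning Toolbox.

```matlab
% Illustrative environment specifications (assumed for this sketch).
obsInfo = rlNumericSpec([4 1]);          % 4-element continuous observation
actInfo = rlFiniteSetSpec([-1 0 1]);     % 3 discrete actions

% Request default networks that include a recurrent LSTM layer.
initOpts = rlAgentInitializationOptions('UseRNN',true);

% DQN agents support recurrent networks, so the default critic
% network of this agent contains an LSTM layer.
agent = rlDQNAgent(obsInfo,actInfo,initOpts);
```

Note that passing the same options to rlPGAgent or rlACAgent would error, since those agents do not support recurrent networks.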

Example: 'UseRNN',true

Object Functions

rlACAgent - Actor-critic reinforcement learning agent
rlPGAgent - Policy gradient reinforcement learning agent
rlDDPGAgent - Deep deterministic policy gradient reinforcement learning agent
rlDQNAgent - Deep Q-network reinforcement learning agent
rlPPOAgent - Proximal policy optimization reinforcement learning agent
rlTD3Agent - Twin-delayed deep deterministic policy gradient reinforcement learning agent
rlSACAgent - Soft actor-critic reinforcement learning agent
rlTRPOAgent - Trust region policy optimization reinforcement learning agent
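Each of the agent creation functions above accepts the options object as an additional input argument. The following is a minimal sketch, assuming Reinforcement Learning Toolbox is installed; the environment specifications are illustrative placeholders.

```matlab
% Illustrative continuous observation and action specifications.
obsInfo = rlNumericSpec([8 1]);
actInfo = rlNumericSpec([2 1]);

% Default initialization options: default networks, no recurrence.
initOpts = rlAgentInitializationOptions;

% Pass the options object as the last argument to an agent creation
% function, here a DDPG agent with default networks.
agent = rlDDPGAgent(obsInfo,actInfo,initOpts);
```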

Examples


Create an agent initialization options object, specifying the number of hidden units and the use of a recurrent neural network.

initOpts = rlAgentInitializationOptions('NumHiddenUnit',64,'UseRNN',true)
initOpts = 
  rlAgentInitializationOptions with properties:

    NumHiddenUnit: 64
           UseRNN: 1

You can modify the options using dot notation. For example, set the number of hidden units to 128.

initOpts.NumHiddenUnit = 128
initOpts = 
  rlAgentInitializationOptions with properties:

    NumHiddenUnit: 128
           UseRNN: 1

Introduced in R2020b