Obtain observation data specifications from reinforcement learning environment, agent, or experience buffer
Extract Action and Observation Information from Reinforcement Learning Environment
Extract action and observation information that you can use to create other environments or agents.
The reinforcement learning environment for this example is the simple longitudinal dynamics for an ego car and a lead car. The training goal is to make the ego car travel at a set velocity while maintaining a safe distance from the lead car by controlling longitudinal acceleration (and braking). This example uses the same vehicle model as the Adaptive Cruise Control System Using Model Predictive Control (Model Predictive Control Toolbox) example.
Open the model and create the reinforcement learning environment.
mdl = 'rlACCMdl';
open_system(mdl);
agentblk = [mdl '/RL Agent'];

% create the observation info
obsInfo = rlNumericSpec([3 1],'LowerLimit',-inf*ones(3,1),'UpperLimit',inf*ones(3,1));
obsInfo.Name = 'observations';
obsInfo.Description = 'information on velocity error and ego velocity';

% action info
actInfo = rlNumericSpec([1 1],'LowerLimit',-3,'UpperLimit',2);
actInfo.Name = 'acceleration';

% define environment
env = rlSimulinkEnv(mdl,agentblk,obsInfo,actInfo)
env = 
  SimulinkEnvWithAgent with properties:

             Model : rlACCMdl
        AgentBlock : rlACCMdl/RL Agent
          ResetFcn : 
    UseFastRestart : on
The reinforcement learning environment env is a SimulinkEnvWithAgent object with the above properties.
Extract the action and observation information from the reinforcement learning environment.
actInfoExt = getActionInfo(env)
actInfoExt = 
  rlNumericSpec with properties:

     LowerLimit: -3
     UpperLimit: 2
           Name: "acceleration"
    Description: [0x0 string]
      Dimension: [1 1]
       DataType: "double"
obsInfoExt = getObservationInfo(env)
obsInfoExt = 
  rlNumericSpec with properties:

     LowerLimit: [3x1 double]
     UpperLimit: [3x1 double]
           Name: "observations"
    Description: "information on velocity error and ego velocity"
      Dimension: [3 1]
       DataType: "double"
The action information contains acceleration values while the observation information contains the velocity and velocity error values of the ego vehicle.
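Because the extracted specifications fully describe the action and observation channels, you can use them to construct a new agent for this environment. The following is a minimal sketch, not part of the original example, assuming default-agent creation from specification objects is available in your release; the initialization options are illustrative.

% Create a DDPG agent with default actor and critic networks
% directly from the extracted specifications.
initOpts = rlAgentInitializationOptions('NumHiddenUnit',64);
agent = rlDDPGAgent(obsInfoExt,actInfoExt,initOpts);

% The agent stores the same data specifications.
getObservationInfo(agent)
getActionInfo(agent)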
env — Reinforcement learning environment
Reinforcement learning environment from which to extract the observation information, specified as one of the following objects:
- MATLAB® environment represented as an rlFunctionEnv object, an rlNeuralNetworkEnvironment object, or a predefined MATLAB environment object
- Simulink® environment represented as a SimulinkEnvWithAgent object

For more information on reinforcement learning environments, see Create MATLAB Reinforcement Learning Environments and Create Simulink Reinforcement Learning Environments.
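The function works the same way for MATLAB environments. For example, a minimal sketch assuming the predefined cart-pole environment that ships with the toolbox:

% Create a predefined MATLAB environment and query its specifications.
env = rlPredefinedEnv('CartPole-Discrete');
obsInfo = getObservationInfo(env)   % continuous (rlNumericSpec) observations
actInfo = getActionInfo(env)        % discrete (rlFiniteSetSpec) actions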
agent — Reinforcement learning agent
Reinforcement learning agent from which to extract the observation information, specified as one of the following objects:
rlQAgent object | rlSARSAAgent object | rlDQNAgent object | rlPGAgent object | rlDDPGAgent object | rlTD3Agent object | rlACAgent object | rlPPOAgent object | rlTRPOAgent object | rlSACAgent object

For more information on reinforcement learning agents, see Reinforcement Learning Agents.
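For example, the following sketch creates a default DQN agent from specification objects and then extracts the observation specification from the agent; the specification values shown are illustrative only.

% Define observation and action specifications, then create a
% default DQN agent from them.
obsInfo = rlNumericSpec([3 1]);
actInfo = rlFiniteSetSpec([-1 0 1]);
agent = rlDQNAgent(obsInfo,actInfo);

% getObservationInfo returns the specifications stored in the agent.
obsInfoFromAgent = getObservationInfo(agent)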
obsInfo — Observation data specifications
rlNumericSpec object | rlFiniteSetSpec object | array of data specification objects
Observation data specifications, returned as a data specification object or as an array of such objects, one for each observation channel.
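Each specification object exposes properties such as Dimension, DataType, and Name, which you can inspect after extraction. A minimal sketch, assuming env is an environment object with a single observation channel:

% Inspect properties of the extracted observation specification.
obsInfo = getObservationInfo(env);
obsInfo.Dimension   % size of the observation channel
obsInfo.DataType    % data type of the observation values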
Introduced in R2019a