Specify Simulation Options in Reinforcement Learning Designer
To configure the simulation of an agent in the Reinforcement Learning Designer app, specify simulation options on the Simulate tab.
Specify Basic Options
On the Simulate tab, you can specify the following basic simulation options.
| Option | Description |
| --- | --- |
| Number of Episodes | Number of episodes to simulate the agent, specified as a positive integer. At the start of each simulation episode, the app resets the environment. |
| Max Episode Length | Maximum number of steps to run in each simulation episode, specified as a positive integer. In general, you define episode termination conditions in the environment. This value is the maximum number of steps to run if those termination conditions are not met. |
| Stop on Error | Select this option to stop the simulation when an error occurs during an episode. |
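As a rough programmatic equivalent of these app settings, you can create a simulation options object with the `rlSimulationOptions` function from Reinforcement Learning Toolbox™ and pass it to `sim`. The sketch below assumes an environment `env` and an agent `agent` already exist in the workspace, and the specific option values are examples only.

```matlab
% Create simulation options that mirror the basic app settings.
simOpts = rlSimulationOptions( ...
    NumSimulations=10, ...   % Number of Episodes
    MaxSteps=500, ...        % Max Episode Length
    StopOnError="on");       % Stop on Error

% Simulate the agent against the environment.
experience = sim(env,agent,simOpts);
```

The name=value argument syntax shown here requires a recent MATLAB release; in older releases, use comma-separated `'Name',value` pairs instead.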
Specify Parallel Simulation Options
To simulate your agent using parallel computing, on the Simulate tab, click Use Parallel. Simulating agents using parallel computing requires Parallel Computing Toolbox™ software. For more information, see Train Agents Using Parallel Computing and GPUs.
To specify options for parallel simulation, select Use Parallel > Parallel simulation options.
In the Parallel Simulation Options dialog box, you can specify the following simulation options.
| Option | Description |
| --- | --- |
| Transfer workspace variables to workers | Select this option to send model and workspace variables to parallel workers. When you select this option, the parallel pool client (the process that starts the simulation) sends variables used in models and defined in the MATLAB® workspace to the workers. |
| Random seed for workers | Random number generator initialization for the workers. |
| Files to attach to parallel pool | Additional files to attach to the parallel pool. Specify names of files in the current working directory, with one name on each line. |
| Worker setup function | Function to run before simulation starts, specified as the name of a function having no input arguments. This function runs once per worker before simulation begins. Write this function to perform any processing that you need prior to simulation. |
| Worker cleanup function | Function to run after simulation ends, specified as the name of a function having no input arguments. You can write this function to clean up the workspace or perform other processing after simulation terminates. |
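These dialog box settings correspond to parallelization properties on the simulation options object. The sketch below is a rough illustration, assuming the `ParallelizationOptions` property of `rlSimulationOptions` exposes these fields (as it does for `rlTrainingOptions`); the file name and function handles are hypothetical placeholders.

```matlab
% Enable parallel simulation.
simOpts = rlSimulationOptions(UseParallel=true);

% Configure worker options; property names are assumed to mirror
% the Parallel Simulation Options dialog box.
simOpts.ParallelizationOptions.TransferBaseWorkspaceVariables = "on";
simOpts.ParallelizationOptions.AttachedFiles = "myDataFile.mat";  % hypothetical file
simOpts.ParallelizationOptions.SetupFcn = @mySetupFcn;            % hypothetical function
simOpts.ParallelizationOptions.CleanupFcn = @myCleanupFcn;        % hypothetical function
```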
The following figure shows an example parallel simulation configuration that uses the following files and functions.
Data file attached to the parallel pool —
Worker setup function —
Worker cleanup function —
- Design and Train Agent Using Reinforcement Learning Designer
- Specify Training Options in Reinforcement Learning Designer