Saving Trained RL Agent after Training
PB75 on 29 Apr 2021
Hi all,
I trained an RL agent and the environment output was acceptable. My plan was to validate the agent in simulation after training finished, using the following code.
Because I was concerned that rerunning the script to call the 'sim' function would restart training on the agent, I manually set my IsDone flag in the simulation to 1 (previously 0, to permit training) and also commented out the call to the 'train' function.
%trainingStats = train(agentSS,env,trainingOpts)
rng(0) 
simOptions = rlSimulationOptions('MaxSteps',maxsteps);
experience = sim(env,agentSS,simOptions);
There was no output from the simulation and no warnings. I then reset the IsDone flag back to 0 and reran the script; now the output was 0 on all scopes.
Did I lose the trained agent data when I set the IsDone flag to 1 after training?
My next step was to try to save the trained agent by adding the following code from the documentation, but still no joy. My thought is that I have overwritten and lost the trained data!
save("initialAgent.mat","agentSS")
load('initialAgent.mat')
rng(0) 
simOptions = rlSimulationOptions('MaxSteps',maxsteps);
experience = sim(env,agentSS,simOptions);
How can I add code to ensure the trained agent is saved automatically via 'rlTrainingOptions' once training completes, e.g. when maxepisodes is reached? I don't want to make the same mistake again.
Is this correct?
trainingOpts = rlTrainingOptions(...
    'MaxEpisodes',maxepisodes, ...
    'MaxStepsPerEpisode',maxsteps, ...
    'StopTrainingCriteria','AverageReward',...
    'StopTrainingValue',-100,... 
    'ScoreAveragingWindowLength',100,...
    'SaveAgentCriteria',"EpisodeCount",...
    'SaveAgentValue',maxepisodes,...
    'SaveAgentDirectory',"savedAgents")
Thanks,
Patrick
Accepted Answer
Emmanouil Tzorakoleftherakis on 29 Apr 2021 (edited 29 Apr 2021)
Setting the IsDone flag to 1 does not erase the trained agent. In fact, it makes sense that the sim showed nothing, because the simulation was immediately stopped by the IsDone flag.
To save the final agent, simply add the save command you already have right after the call to 'train'.
My guess is that when you reran the whole script, you created a new agent from scratch and saved it to the mat file again, replacing the already trained agent. This is why it's good practice to always use sections in your (live) script, so that you can pick exactly which lines to run.
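In script form, the pattern described above might look like the sketch below, using the variable names from the question (agentSS, env, trainingOpts, maxsteps); the filename 'trainedAgent.mat' is a hypothetical choice, picked to be distinct from any file an earlier script run may have written:

```matlab
%% Training section: run this once, then do not rerun it
trainingStats = train(agentSS, env, trainingOpts);
% Save immediately after training so a later script rerun
% cannot overwrite the trained agent with a fresh one.
save("trainedAgent.mat", "agentSS");

%% Validation section: safe to rerun; does not retrain
load("trainedAgent.mat", "agentSS");   % restores the trained agent
rng(0)
simOptions = rlSimulationOptions('MaxSteps', maxsteps);
experience = sim(env, agentSS, simOptions);
```

Splitting the script into sections like this (with %% in a live script or plain script) lets you re-execute the validation section on its own without ever touching the training section.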
3 Comments
Apoorv Pandey on 27 Feb 2023
How can I save multiple agents, as in this example?
https://in.mathworks.com/help/reinforcement-learning/ug/train-agents-for-path-following.html
Zaid Jaber on 14 Nov 2023
Hi, how can I use that file to resume training with a larger number of episodes?