DDPG Agent: Training not stabilizing, producing an unstable model

Dear MATLAB,
I am training a DDPG agent on randomly set straight lines (levels) and later testing it on a benchmark waveform. Shouldn't the training stabilize over time and produce a stable model? The agent saved at 960 episodes seems to perform better than the one saved at 2180 episodes. Both agents were saved when the average reward over 50 episodes exceeded 25 k. The difference between the models saved at 940 and 960 episodes also seems drastic.
The Episode Manager screenshot below shows the average reward (over 50 episodes) going up and down several times. One would expect it to look like the dark green line, stabilizing over time. What change can I make to create a stable model?
Action space: 1.0 to 10.0, continuous
Test waveform: 2000 seconds long
Training sample time and simulation length: Ts = 1 and Tf = 250
Hyper-parameters: learning rates: critic = 1e-03, actor = 1e-04 | gamma (discount) = 0.95, batch size = 64
Neurons: observation path: FC1 = 64, FC2 = 24; actor path: FC1 = 24
DDPG noise variance = 0.1, VarianceDecayRate = 1e-5 (I have also tried a noise variance of 0.45 and decay rates of 1e-3, 1e-4, etc.); a code sketch of these settings follows below
(For a higher-resolution image, please see the attached V.9.94.4_MATLAB_16-Dec-2019.jpg.)
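For reference, a minimal sketch of how the settings above map onto Reinforcement Learning Toolbox options in R2019a; the environment, observation/action specs, and actor/critic networks are assumed to exist and are omitted here:

% Sketch only: the listed hyper-parameters expressed as R2019a toolbox options.
criticOpts = rlRepresentationOptions('LearnRate',1e-3);   % critic learning rate
actorOpts  = rlRepresentationOptions('LearnRate',1e-4);   % actor learning rate

agentOpts = rlDDPGAgentOptions( ...
    'SampleTime',1, ...                  % Ts = 1
    'DiscountFactor',0.95, ...           % gamma
    'MiniBatchSize',64);
agentOpts.NoiseOptions.Variance          = 0.1;
agentOpts.NoiseOptions.VarianceDecayRate = 1e-5;

trainOpts = rlTrainingOptions( ...
    'MaxStepsPerEpisode',250, ...        % Tf/Ts = 250/1
    'ScoreAveragingWindowLength',50, ... % 50-episode average reward
    'SaveAgentCriteria','AverageReward', ...
    'SaveAgentValue',25000);             % save agents whose average reward exceeds 25 k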

Answers (1)

Rajesh Siraskar on 20 Dec 2019
Based on several rounds of training, my personal observation is that RL converges fairly early to an optimal expected reward.
Training beyond that point simply does not seem to help. I think it is important to stop once we see that the agent has reached that optimum.
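One hedged way to enforce that in code, reusing the 50-episode averaging window and the 25 k target from the question, is a stop criterion so training halts at the optimum instead of drifting past it:

trainOpts = rlTrainingOptions( ...
    'ScoreAveragingWindowLength',50, ...
    'StopTrainingCriteria','AverageReward', ...
    'StopTrainingValue',25000);   % stop once the 50-episode average reward reaches 25 k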
  1 Comment
Emmanouil Tzorakoleftherakis
+1 on that. It could, for example, be the case that you reach a point in training where you have a decent policy, but the agent's exploration then leads the search somewhere else (pros and cons of sample-based gradients).
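As a sketch, you can also compare the saved candidate agents directly on the benchmark waveform; sim runs the policy without the training exploration noise, so the comparison is not distorted by it. The file and variable names below assume the toolbox's default SaveAgentDirectory naming:

load('savedAgents/Agent960.mat','saved_agent');   % assumed default save folder/file name
simOpts = rlSimulationOptions('MaxSteps',2000);   % test waveform: 2000 s at Ts = 1
experience = sim(env,saved_agent,simOpts);        % env is the benchmark environment
totalReward = sum(experience.Reward.Data);        % score for this candidate agent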


Release

R2019a
