Training of RL DDPG Agent is not working (Control of an Inverted pendulum)
This project started with the MathWorks example "Train DDPG Agent to Swing Up and Balance Pendulum".
The pendulum block in the model has been replaced with Simscape components, and a DC electric motor and a controllable voltage supply have been added. See my_simscape_pendulum_model.slx
I trained the agent using the settings in training.m
The session was stopped after 17 hours and 796 episodes. Early on, I could see the pendulum rising to about 30 degrees above the downward hanging position before it stalled. This indicates to me that enough torque is available for the agent to use a back-and-forth rocking motion to raise the pendulum. However, after many hours the agent had not learned the rocking motion and appeared to be stuck in a bad policy. See the screenshot of the RL Episode Manager taken when training was stopped.
My research suggests that the learning rate or the exploration options may need to be modified. However, I have not been able to find documentation on how to do this.
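For reference, here is a minimal sketch of where these knobs live in the Reinforcement Learning Toolbox DDPG API, assuming the R2019-era interface used by the swing-up example (the property names below are from the rlDDPGAgentOptions and rlRepresentationOptions documentation; the values are illustrative, not tuned for my model):

```matlab
% Exploration is controlled by the Ornstein-Uhlenbeck noise model on the agent
agentOpts = rlDDPGAgentOptions( ...
    'SampleTime',0.05, ...               % should match the model's sample time
    'ExperienceBufferLength',1e6, ...
    'MiniBatchSize',64);
agentOpts.NoiseOptions.Variance = 0.6;           % larger -> more exploration
agentOpts.NoiseOptions.VarianceDecayRate = 1e-5; % smaller -> explores for longer

% Learning rates are set on the actor and critic representations
criticOpts = rlRepresentationOptions('LearnRate',1e-3,'GradientThreshold',1);
actorOpts  = rlRepresentationOptions('LearnRate',1e-4,'GradientThreshold',1);
```

These options objects are then passed when constructing the critic/actor representations and the rlDDPGAgent, so I assume the equivalent lines in training.m would be the place to change them.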
Do you have any suggestions?