Question regarding DDPG PMSM FOC control example

I am trying to do PMSM control similar to the DDPG example, but I have modelled the motor in the dq frame (vd, vq as inputs; id, iq, and speed as outputs).
Do I need to discretize the entire environment with different sample times and use IIR filters if I am not going to use PWM? Or does the DDPG agent require the environment to be discrete?

Accepted Answer

Emmanouil Tzorakoleftherakis
All RL agents in Reinforcement Learning Toolbox operate at fixed discrete-time intervals by default. However, you do not need to do anything particular to discretize your Simulink model. In fact your model can run at variable integration step and the "sample time" parameter of the agent will determine how frequently the RL Agent block will be executed.
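As a minimal sketch of the point above: the agent's "SampleTime" option is what sets how often the RL Agent block fires, independently of the model's solver. The observation/action specs (`obsInfo`, `actInfo`), the sample-time value, and the use of the default-network `rlDDPGAgent` constructor are all illustrative assumptions, not taken from the example itself.

```matlab
% Hypothetical sketch (R2020b-era API): the RL Agent block executes every
% Ts seconds regardless of whether the Simulink model uses a fixed- or
% variable-step solver. obsInfo/actInfo are placeholders for your specs.
Ts = 2e-5;                                      % example agent sample time (s)
agentOpts = rlDDPGAgentOptions('SampleTime', Ts);
agent = rlDDPGAgent(obsInfo, actInfo, agentOpts); % default actor/critic networks
```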
  2 Comments
Mohamed Hannan Sohail on 8 Mar 2021
Edited: Mohamed Hannan Sohail on 8 Mar 2021
Okay, understood. Thank you for answering!
One more question:
Is there any way to speed up the training process? I have an NVIDIA GTX 1660 Ti with 6 GB of VRAM. I tried setting the device to GPU in rlRepresentationOptions and setting UseParallel to true, but parallel processing crashes my computer.
It takes 1 hour to train 10 episodes of the DDPG PMSM FOC control example.
Emmanouil Tzorakoleftherakis
The FOC example is computationally expensive because the agent sample time is very small. Training speed depends on many design choices, including how long the model takes to simulate.
Even with parallelization, there is no guarantee of linear scaling in training time. I would still recommend creating a technical support case so that we can look into the crashing issue.
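For reference, a hedged sketch of the GPU and parallel settings discussed above, using the R2020b-era Reinforcement Learning Toolbox option names. The specific values (learn rate, episode count, async mode) and the `env`/`agent` variables are illustrative assumptions; actual speedup depends on the model and hardware, as noted.

```matlab
% Hypothetical sketch: 'UseDevice','gpu' moves actor/critic computation to
% the GPU; 'UseParallel',true distributes episode simulation across workers.
criticOpts = rlRepresentationOptions('LearnRate', 1e-3, 'UseDevice', 'gpu');

trainOpts = rlTrainingOptions('MaxEpisodes', 500, 'UseParallel', true);
trainOpts.ParallelizationOptions.Mode = 'async';  % asynchronous experience collection

% env and agent assumed created earlier for your dq-frame PMSM model
result = train(agent, env, trainOpts);
```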



Release

R2020b
