Continuous control problem with reinforcement learning
Hi everyone, I'm trying to control my microgrid model using RL. I know there is a water tank example in MATLAB/Simulink that replaces the PID controller with an RL agent. Similarly, I created my own Simulink model and implemented the RL agent with the environment and observation blocks.
My questions are:
1) In the water tank example the desired value is a scalar (10). I am currently giving the load as a scalar value too, but I actually want to provide a vector the same length as the simulation time and apply the corresponding load value at each second of the simulation.
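One common way to do this is to replace the constant load input in the Simulink model with a From Workspace block and feed it a `timeseries` built in the base workspace. A minimal sketch, where the stop time, sample time, and load profile are all assumptions for illustration:

```matlab
% Sketch: time-varying load via a From Workspace block (block name,
% stop time, and the load profile below are assumed placeholders).
Tf = 200;                                % simulation stop time in seconds
t  = (0:1:Tf)';                          % one sample per second
loadProfile = 500 + 100*sin(2*pi*t/50);  % example load vector (made up)
loadTS = timeseries(loadProfile, t);     % data for the From Workspace block

% In the From Workspace block dialog, set "Data" to loadTS and choose an
% interpolation/extrapolation option consistent with a 1 s sample time.
```

At each simulation time step the block then outputs the load sample corresponding to the current time, instead of a fixed scalar.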
2) My observation and action have upper and lower bounds and I defined them by following lines:
obsInfo = rlNumericSpec([3 1], ...
    'LowerLimit', [0 0 0]', ...
    'UpperLimit', [1000 1000 250]');
However, during simulation the third element does not respect its upper limit of 250.
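Note that for continuous agents, the `LowerLimit`/`UpperLimit` fields of `rlNumericSpec` describe the space but are not necessarily enforced automatically; a common pattern is to bound the action inside the actor network with a `tanhLayer` followed by a `scalingLayer`, or to add a Saturation block in Simulink. A hedged sketch for a single action assumed to lie in [0, 250] (layer sizes are placeholders):

```matlab
% Sketch: bounding a continuous action inside the actor network.
% The observation size (3), hidden width (64), and action range [0, 250]
% are assumptions for illustration.
actionMax = 250;
actorNet = [
    featureInputLayer(3, 'Name', 'obs')       % 3 observations
    fullyConnectedLayer(64, 'Name', 'fc1')
    reluLayer('Name', 'relu1')
    fullyConnectedLayer(1, 'Name', 'fcOut')
    tanhLayer('Name', 'tanh')                 % squashes output to [-1, 1]
    scalingLayer('Name', 'scale', ...         % maps [-1, 1] to [0, 250]
        'Scale', actionMax/2, 'Bias', actionMax/2)
    ];
```

If the out-of-range signal is an observation rather than an action, the fix is usually in the model itself (e.g. a Saturation block before the observation bus), since the spec limits alone will not clip the signal.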
Do you have any idea about these things?
Thanks for any suggestions in advance.