
Emmanouil Tzorakoleftherakis
MathWorks
Statistics

MATLAB Answers
RANK: 114 of 281 902
REPUTATION: 1 038
CONTRIBUTIONS: 0 Questions, 392 Answers
ANSWER ACCEPTANCE: 0.00%
VOTES RECEIVED: 99

File Exchange
RANK: 12 665 of 19 064
REPUTATION: 23
AVERAGE RATING: 0.00
CONTRIBUTIONS: 1 File
DOWNLOADS: 4
ALL TIME DOWNLOADS: 204

Cody
RANK: of 134 283
CONTRIBUTIONS: 0 Problems, 0 Solutions
SCORE: 0
NUMBER OF BADGES: 0

Discussions
CONTRIBUTIONS: 0 Posts

ThingSpeak
CONTRIBUTIONS: 0 Public Channels

Highlights
CONTRIBUTIONS: 0 Highlights
Content Feed
How can I set constraints on states rather than inputs and outputs in MPC?
For linear MPC, you can add constraints on inputs and outputs only. You can either set the desired states to also be outputs (if...
2 days ago | 0
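A minimal sketch of that workaround, assuming a generic state-space plant (the matrices and bounds below are placeholders): append an identity block to C so the states appear as extra outputs, then bound those outputs through the OV properties.

% Expose states as extra outputs so linear MPC can constrain them
A = [0 1; -2 -3]; B = [0; 1]; C = [1 0]; D = 0;   % placeholder plant
n = size(A,1);
Caug = [C; eye(n)];                   % y = [original output; states]
Daug = [D; zeros(n, size(B,2))];
plant = ss(A, B, Caug, Daug);
mpcobj = mpc(plant, 0.1);             % Ts = 0.1 s
mpcobj.Weights.OutputVariables = [1 0 0];       % do not track the state outputs
mpcobj.OV(2).Min = -1;  mpcobj.OV(2).Max = 1;   % bounds on x1
mpcobj.OV(3).Min = -5;  mpcobj.OV(3).Max = 5;   % bounds on x2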
How do we know that the PI controller can be modeled using a single neuron?
The network used to model the PI controller is exactly this one: actorNet = [ featureInputLayer(numObs) fullyConnected...
3 days ago | 0
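The reasoning, with a minimal sketch (numObs and numAct are assumptions, and the shipping example uses a custom fullyConnectedPILayer rather than the plain layer below): a PI law u = Kp*e + Ki*integral(e) is a weighted sum of two inputs, which is exactly what a single bias-free linear neuron computes when the observation is [e; integral(e)].

% One linear neuron reproduces a PI controller
numObs = 2;   % observation: [error; integral of error]
numAct = 1;   % action: control signal
actorNet = [
    featureInputLayer(numObs)
    fullyConnectedLayer(numAct, Bias=0, BiasLearnRateFactor=0)  % keep bias at zero
    ];
% the two learned weights of this layer play the roles of Kp and Ki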
Using a Simulink dynamic motorcycle in a driving scenario trajectory.
Here is a video that shows how to do that using Model Predictive Control Toolbox: https://www.mathworks.com/videos/understandin...
6 days ago | 0
question about external action of DDPG
The loss function does not change. What happens is that the experience buffer is populated with the action from the external sig...
6 days ago | 0
| accepted
Use a linear state-space model in NLMPC
Is there a reason you want to convert to nonlinear MPC? If you get a good solution with linear MPC, going to nonlinear will only...
6 days ago | 0
Integral MPC in Simulink
Why don't you use the MVRate constraint instead of adding the term in the cost function? https://www.mathworks.com/help/mpc/ug...
6 days ago | 0
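A short sketch of what that looks like on an existing linear MPC object (mpcobj and the numeric values are placeholders):

% Limit and/or penalize the input move rate directly
mpcobj.MV(1).RateMin = -0.05;                    % hard bound on delta-u per step
mpcobj.MV(1).RateMax =  0.05;
mpcobj.Weights.ManipulatedVariablesRate = 0.3;   % soft penalty on delta-u in the cost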
Why reinforcement learning has different results of action between sim() and getAction()?
Hi, Which release are you using? We tried in R2023a and R2023b with UseExplorationPolicy = 0, and getAction and sim provide the s...
6 days ago | 0
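For reference, a sketch of the setting being discussed (agent and obs are placeholders): with exploration disabled, getAction returns the same deterministic action that sim uses.

agent.UseExplorationPolicy = false;   % greedy/deterministic policy
act = getAction(agent, {obs});        % now consistent with sim() output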
Epsilon greedy policy for DQN
You can use the formula here to calculate the epsilon value.
6 days ago | 1
| accepted
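A small sketch of that formula, assuming the documented epsilon-greedy update in which epsilon is multiplied by (1 - EpsilonDecay) every agent step until it reaches EpsilonMin:

% Reconstruct epsilon after k agent steps
Epsilon0     = 1.0;     % initial Epsilon
EpsilonDecay = 0.005;   % per-step decay rate
EpsilonMin   = 0.01;    % floor
k = 0:2000;
epsilon = max(EpsilonMin, Epsilon0*(1 - EpsilonDecay).^k);
plot(k, epsilon), xlabel("agent step"), ylabel("epsilon")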
Reinforcement learning agent for mixed action space.
Reinforcement Learning Toolbox does not support agents with both continuous and discrete actions. Can you share some more detail...
6 days ago | 0
Can I decide the RL agent's actions?
It seems like the paper you saw uses some logic to implement the behavior you mention. You could do the same with an if statemen...
6 days ago | 0
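As a sketch, such logic can live in a MATLAB Function block between the agent and the plant (all names here are hypothetical):

function u = applyActionLogic(agentAction, overrideCondition)
% Pass the agent's action through unless an override rule fires
if overrideCondition
    u = 0;              % forced safe action
else
    u = agentAction;    % agent acts freely
end
end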
The agent can learn the policy through the external action port in the RL Agent so that the agent mimics the output of the reference signal
It seems the agent started learning how to imitate the existing controller but needs more time. What does the Episode Manager lo...
6 days ago | 0
How to import a model built by COMSOL in the Reinforcement Learning Designer
I haven't used COMSOL before but it seems that you may be able to co-simulate your model with Simulink. In that case, the proces...
6 days ago | 0
Multi-Agent Reinforcement learning
As of R2023b, you can do multi-agent reinforcement learning using MATLAB environments. Please take a look at this example and R2...
about 2 months ago | 0
How can I scale the action of a DDPG agent in reinforcement learning?
DDPG training works by adding noise on top of the actor output to promote exploration. In that case you may see constraint viola...
about 2 months ago | 0
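One common way to enforce the limits at the network level, sketched below with placeholder bounds and sizes: saturate with tanhLayer, then map [-1, 1] to the action range with scalingLayer.

upper = 2;  lower = -2;               % placeholder action bounds
actorNet = [
    featureInputLayer(numObs)
    fullyConnectedLayer(64)
    reluLayer
    fullyConnectedLayer(numAct)
    tanhLayer                                                  % squash to [-1, 1]
    scalingLayer(Scale=(upper-lower)/2, Bias=(upper+lower)/2)  % map to [lower, upper]
    ];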
How can I define eight discrete actions in RL
The implementation shown here is one option. Hope that helps.
about 2 months ago | 0
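For reference, a minimal sketch of an eight-element discrete action space (the values are placeholders):

% Eight scalar actions
actInfo = rlFiniteSetSpec([-1 -0.5 -0.1 0 0.1 0.5 1 2]);
% or eight vector-valued actions, one cell per choice:
% actInfo = rlFiniteSetSpec({[0;0],[0;1],[1;0],[1;1],[-1;0],[-1;1],[0;-1],[1;-1]});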
In Simulink, during DDPG training to regulate the CO2 concentration, the environment is not simulating. I can see that only the variables specified in the function are updating.
Hello, I would start by taking a look at the output of the agent. If the agent output does not make sense, the environment will...
3 months ago | 0
5G Handover with Reinforcement Learning, mismatch of input channels and observations in reinforcement learning representation
I suspect you did not set up your critic network properly. If you share that code snippet we can take a closer look. An alternat...
3 months ago | 0
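For comparison, a sketch of a correctly wired Q-value critic with separate observation and action paths (numObs, numAct, obsInfo and actInfo are assumed to come from the environment):

obsPath = [featureInputLayer(numObs, Name="obsIn")
           fullyConnectedLayer(32)
           reluLayer
           fullyConnectedLayer(16, Name="obsOut")];
actPath = [featureInputLayer(numAct, Name="actIn")
           fullyConnectedLayer(16, Name="actOut")];
common  = [additionLayer(2, Name="add")
           reluLayer
           fullyConnectedLayer(1)];
net = layerGraph(obsPath);
net = addLayers(net, actPath);
net = addLayers(net, common);
net = connectLayers(net, "obsOut", "add/in1");
net = connectLayers(net, "actOut", "add/in2");
critic = rlQValueFunction(net, obsInfo, actInfo, ...
    ObservationInputNames="obsIn", ActionInputNames="actIn");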
Implementing mpctools package (from Rawlings group) in Simulink
I cannot comment on mpctools, but if your objective is to use IPOPT in Simulink, Model Predictive Control Toolbox allows you to...
3 months ago | 0
I want to print out multiple actions in reinforcement learning
Hi, If you want to create an agent that outputs multiple actions, you need to make sure the actor network is set up accordingly...
3 months ago | 0
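A minimal sketch of an actor emitting two actions, assuming a continuous deterministic agent such as DDPG; the point is that the action spec and the final layer size must agree:

obsInfo = rlNumericSpec([numObs 1]);
actInfo = rlNumericSpec([2 1], LowerLimit=-1, UpperLimit=1);   % two actions
actorNet = [
    featureInputLayer(numObs)
    fullyConnectedLayer(64)
    reluLayer
    fullyConnectedLayer(2)    % one output per action, matches actInfo
    tanhLayer
    ];
actor = rlContinuousDeterministicActor(actorNet, obsInfo, actInfo);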
Issue with Q0 Convergence during Training using PPO Agent
It seems you set the training to stop when the episode reward reaches the value of 0.985*(Tf/Ts)*3. I cannot comment on the valu...
3 months ago | 1
| accepted
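The stop condition in question would look roughly like this in rlTrainingOptions (Tf and Ts come from the asker's model); training halts on the episode-reward threshold regardless of whether Q0 has converged:

trainOpts = rlTrainingOptions( ...
    MaxEpisodes=5000, ...
    StopTrainingCriteria="EpisodeReward", ...
    StopTrainingValue=0.985*(Tf/Ts)*3);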
Where is the actual storage location of the RL agent's weights?
Hello, You can implement the trained policy with automatic code generation, e.g. with MATLAB Coder, Simulink Coder and so on. Y...
3 months ago | 0
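A sketch of that workflow: generatePolicyFunction writes an evaluatePolicy.m entry point plus a MAT-file holding the network weights, which you can then compile (the file name below is a placeholder).

generatePolicyFunction(agent, "MATFileName", "trainedPolicy.mat");
action = evaluatePolicy(observation);              % generated entry point
% codegen evaluatePolicy -args {zeros(numObs,1)}   % C/C++ via MATLAB Coder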
How do I find the objective/cost function for the example Valet parking using multistage NLMPC. (https://www.mathworks.com/help/mpc/ug/parking-valet-using-nonlinear-model-pred
Hi, The example you mentioned uses MPC on two occasions: 1) On the outer loop for planning through the Vehicle Path Planner blo...
3 months ago | 0
Replace RL type (PPO with DDPG) in a MATLAB example
PPO is a stochastic agent whereas DDPG is deterministic. This means that you cannot just use actors and critics designed for PPO...
3 months ago | 1
| accepted
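Sketch of the structural difference (the networks and specs are placeholders): a DDPG actor outputs the action itself, while a PPO actor outputs the mean and standard deviation of an action distribution, so one cannot be dropped in for the other.

% DDPG: deterministic actor, single output head
ddpgActor = rlContinuousDeterministicActor(ddpgNet, obsInfo, actInfo);
% PPO: stochastic (Gaussian) actor, mean and std output heads
ppoActor = rlContinuousGaussianActor(ppoNet, obsInfo, actInfo, ...
    ActionMeanOutputNames="mean", ...
    ActionStandardDeviationOutputNames="std");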
NMPC Controller not buildable for Raspberry Pi
Hard to tell without more details, but I have a suspicion that you are defining the state and cost functions as anonym...
3 months ago | 0
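If that is the cause, a sketch of the code-generation-friendly setup is to reference the functions by file name instead of anonymous handles (myStateFcn.m and myCostFcn.m are assumed to be files on the path):

nlobj = nlmpc(nx, ny, nu);
nlobj.Model.StateFcn = "myStateFcn";              % not @(x,u) ...
nlobj.Optimization.CustomCostFcn = "myCostFcn";   % not an anonymous handle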
Regarding Default Terms in DNN
Which algorithm are you using? You can log loss data by following the guidelines here.
4 months ago | 1
How to start, pause, log information, and continue a simscape simulation?
If you go for #2, why don't you set it so that you have episodes that are 10 seconds long? When each episode ends, change the i...
4 months ago | 0
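A sketch of option #2 with fixed-length episodes, assuming a Simulink environment; localResetFcn and the variable name x0 are placeholders. The reset function runs before every episode and can change the initial conditions:

env = rlSimulinkEnv(mdl, agentBlk, obsInfo, actInfo);
env.ResetFcn = @(in) localResetFcn(in);

function in = localResetFcn(in)
% Randomize the model's initial state at the start of each episode
x0 = 2*rand - 1;
in = setVariable(in, "x0", x0);
end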
How to put some obstacles into my environment, then train my agent to avoid the obstacles and find an optimal path to follow using reinforcement learning in Simulink?
This example may be helpful.
4 months ago | 0
how to get the cost function result from model predictive controller?
Please take a look at the doc page of mpcmove. The Info output contains a field called Cost. You can use it to visualize how th...
4 months ago | 0
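A minimal sketch of logging that field in a simulation loop (mpcobj, ym, r and Tsteps are placeholders):

xc = mpcstate(mpcobj);                        % controller state
costLog = zeros(Tsteps, 1);
for k = 1:Tsteps
    [u, Info] = mpcmove(mpcobj, xc, ym, r);   % ym: measurement, r: reference
    costLog(k) = Info.Cost;                   % optimal cost at this interval
    % ... apply u to the plant and update ym ...
end
plot(costLog), xlabel("step"), ylabel("MPC cost")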
The solution obtained with the nlmpcmove function of the MPC Toolbox is not "reproducible"?
Hi, For problem 1: I am not sure what's inside that state function but presumably there is some integrator that gives you k+1....
4 months ago | 0
How to keep actions values at minimum before disturbance and let the agent choose different action values only after the disturbance?
Please take a look here. As of R2022a you can place the RL policy block inside a triggered subsystem and only enable the subsyst...
4 months ago | 0