Is it possible to achieve centralized learning in multi-agent RL with a customized function layer that differs among agents?
I am trying to do multi-agent reinforcement learning in MATLAB. To enable centralized learning, all agents share the same observation and action spaces. Moreover, their neural networks are almost identical, except for a customized function layer. This function layer extracts each agent's sub-observation from the full observation, and therefore differs among agents. I understand that I could instead give each agent its own observation and avoid such a function layer. However, the agents' observations would then have different dimensions, and since centralized learning requires all agents to be identical, that modification rules out the centralized learning process.
So, may I ask what I can do to keep centralized learning after adding such a function layer to the network? Note that the function layer does not contain any learnable parameters.
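For context, here is a minimal sketch of the kind of per-agent slicing layer I mean. The observation size, the index range idxA, and the layer sizes are placeholders for illustration; only the functionLayer (which has no learnables) differs between agents, while the rest of the network is identical:

```matlab
% Sketch: all agents receive the same full observation, but a
% per-agent functionLayer (no learnables) slices out that agent's
% sub-observation before the otherwise identical trunk.
obsDim = 12;    % full shared observation size (placeholder)
idxA   = 1:4;   % sub-observation indices for this agent (placeholder)

layers = [
    featureInputLayer(obsDim, Name="obs")
    % Agent-specific slice; @(X) X(idxA,:) has no learnable parameters.
    functionLayer(@(X) X(idxA,:), Name="slice")
    fullyConnectedLayer(64)
    reluLayer
    fullyConnectedLayer(2, Name="out")
    ];
```

Every agent would use this same layer array with a different idxA, so the networks agree in structure and learnables and differ only in the fixed slicing function.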