Using RL, How to train multi-agents such that each agent will navigate from its initial position to goal position avoiding collisions?
Let's assume there is a set of agents distributed in 3-D Cartesian space. A trajectory should be generated for each agent such that, if the agent follows its trajectory while heading to its goal waypoint, no collision occurs with any other agent. Any guidance on solving such a task would be highly appreciated.
Answers (1)
Emmanouil Tzorakoleftherakis
on 5 Mar 2021
Edited: Emmanouil Tzorakoleftherakis
on 5 Mar 2021
It's possible that the scenario you described can be solved by training a single agent and then "deploying" that trained policy to all UAVs/UUVs in your fleet. That would make the problem easier and training less expensive. For a 2D example, take a look at this.
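To make the "train once, deploy to all" idea concrete, here is a minimal MATLAB sketch of the deployment side. It assumes `trainedAgent` is the result of a prior call to `train` from Reinforcement Learning Toolbox; `getObservationForAgent` and `applyActionToAgent` are hypothetical helpers standing in for your own sensing and actuation code.

```matlab
% Sketch: a single trained policy reused for every vehicle in the fleet.
% 'trainedAgent' is assumed to come from a prior call to train();
% getObservationForAgent / applyActionToAgent are placeholders for your
% own observation and actuation code.
for i = 1:numAgents
    obs = getObservationForAgent(i);        % each vehicle observes locally
    act = getAction(trainedAgent, {obs});   % same policy for all vehicles
    applyActionToAgent(i, act);             % apply this vehicle's action
end
```

Because every vehicle runs the same policy on its own local observation, the policy only needs to be trained once, and adding vehicles does not require retraining.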
Emmanouil Tzorakoleftherakis
on 6 Mar 2021
I think it's a matter of what inputs you provide to the policy and which coordinate system you use (although I was thinking of the scenario where each agent has its own sensors). If you only use odometry data from all agents, you could transform it into the distance to each nearby agent (probably including heading/bearing as well) and feed all of this information into the policy.
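As a rough illustration of that observation transform, here is one way it might look in MATLAB. Everything here is an assumption for the sketch: the choice of keeping the 3 nearest agents, the azimuth/elevation parameterization, and the function name itself.

```matlab
% Sketch: build one agent's observation vector from the odometry (positions)
% of all agents. ownPos, ownGoal are 1x3; otherPos is Nx3 (the other agents).
% Keeping the 3 nearest agents is an arbitrary design choice for this sketch.
function obs = buildObservation(ownPos, ownGoal, otherPos)
    rel = otherPos - ownPos;                          % relative positions
    d   = vecnorm(rel, 2, 2);                         % distance to each agent
    az  = atan2(rel(:,2), rel(:,1));                  % bearing in the x-y plane
    el  = atan2(rel(:,3), hypot(rel(:,1), rel(:,2))); % elevation angle
    [~, idx] = sort(d);                               % nearest agents first
    k = min(3, numel(d));                             % keep up to 3 nearest
    nearest = [d(idx(1:k)), az(idx(1:k)), el(idx(1:k))];
    goalVec = ownGoal - ownPos;                       % vector to own goal
    obs = [goalVec, reshape(nearest.', 1, [])];       % fixed-length observation
end
```

Fixing the number of nearby agents included (here 3) keeps the observation vector a constant length, which is what a neural-network policy expects regardless of fleet size.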