
Abstract:
Nowadays, collaborative multi-agent systems, in which a group of agents work together to reach a common goal, are used to solve a wide range of problems. Cooperation between agents brings benefits such as reduced operational costs, high scalability, and significant adaptability. Usually, reinforcement learning is employed to obtain an optimal policy for these agents. Learning in collaborative multi-agent dynamic environments with large and stochastic state spaces has become a major challenge in many applications. These challenges include the effect of the size of the state space on learning time, ineffective collaboration between agents, and the lack of appropriate coordination between agents' decisions. On the other hand, using reinforcement learning poses challenges of its own, such as the difficulty of determining an appropriate learning goal or reward and the long convergence time caused by trial-and-error learning. This paper, by introducing a communication framework for collaborative multi-agent systems, attempts to address some of these challenges in the herding problem. To handle the convergence problems, knowledge transfer has been utilized, which can significantly increase the efficiency of reinforcement learning algorithms. Cooperation and coordination between the agents are carried out through a head agent in each group of agents and a coordinator agent, respectively. This framework has been successfully applied to herding problem instances, and experimental results reveal a significant improvement in the performance of the agents.
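The abstract mentions speeding up reinforcement learning convergence via knowledge transfer. As a minimal sketch of that general idea (not the paper's actual algorithm), the snippet below shows a tabular Q-learning update in which a new learner is warm-started from a previously trained agent's Q-table; all names and hyperparameters (`ALPHA`, `GAMMA`, `transfer_q`) are illustrative assumptions.

```python
# Hedged sketch: tabular Q-learning with a transferred initial Q-table.
# Hyperparameters and function names are illustrative assumptions,
# not taken from the paper itself.

ALPHA, GAMMA = 0.1, 0.9  # learning rate, discount factor

def q_update(q, state, action, reward, next_state, actions):
    """One Q-learning step: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(q.get((next_state, a), 0.0) for a in actions)
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + ALPHA * (reward + GAMMA * best_next - old)

def transfer_q(source_q):
    """Warm-start a learner by copying a source agent's Q-values.

    Starting from non-zero estimates (instead of a blank table) is one
    simple way knowledge transfer can shorten trial-and-error learning.
    """
    return dict(source_q)

# Usage: a learner initialized from a source task starts with a head start
# rather than learning every state-action value from scratch.
source = {(0, "right"): 0.5}          # knowledge from a previously trained agent
q = transfer_q(source)
q_update(q, 0, "right", 1.0, 1, ["left", "right"])
```

The design choice here is deliberately simple: copying Q-values assumes the source and target tasks share state and action spaces; in practice, transfer schemes map or filter the source knowledge when the tasks differ.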
Type of Article: Research paper | Subject: Special
Received: 2019/01/20 | Accepted: 2019/12/26 | Published: 2019/08/15

© 2020 All Rights Reserved | Journal of Control