Coordination in Collaborative Work by Deep Reinforcement Learning with Various State Descriptions


Abstract

Cooperation and coordination are sophisticated behaviors and remain major issues in multi-agent systems research, because how agents should cooperate and coordinate depends not only on environmental characteristics but also on each other's behaviors and strategies. Meanwhile, multi-agent deep reinforcement learning (MADRL) has recently received much attention because of its potential for learning and facilitating coordinated behaviors. However, the characteristics of the coordination structures learned in this way have not been sufficiently clarified. In this paper, focusing on MADRL in which each agent has its own deep Q-network (DQN), we show that different types of input to the network lead to various coordination structures, using the pickup and floor laying problem, which is an abstraction of our target problem. We also show that the generated coordination structures affect the overall performance of the multi-agent system.
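The setting the abstract describes is independent learners: each agent trains its own DQN, and the "state description" is simply what is encoded into that network's input. A minimal sketch of this setup appears below. The environment, network sizes, action set, and the two state-description names ("local view" vs. "local view plus relative agent positions") are illustrative assumptions, not the paper's exact configuration, and standard DQN components such as replay buffers and target networks are omitted for brevity.

```python
# Sketch of independent per-agent DQN learners with a configurable state
# description. All dimensions, names, and hyperparameters are hypothetical.
import random
import torch
import torch.nn as nn


class QNetwork(nn.Module):
    """Small MLP Q-network; each agent owns an independent copy."""
    def __init__(self, state_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, s):
        return self.net(s)


# Hypothetical state descriptions: the input type fed to each agent's DQN.
STATE_DIMS = {"local_view": 25, "local_view_plus_agents": 25 + 8}
N_ACTIONS = 5  # e.g., stay / up / down / left / right
GAMMA, EPS = 0.95, 0.1

# Each agent gets its own network and optimizer (no parameter sharing).
desc = "local_view_plus_agents"
agents = {i: QNetwork(STATE_DIMS[desc], N_ACTIONS) for i in range(4)}
optims = {i: torch.optim.Adam(q.parameters(), lr=1e-3) for i, q in agents.items()}


def act(agent_id: int, state: torch.Tensor) -> int:
    """Epsilon-greedy action from the agent's own Q-network."""
    if random.random() < EPS:
        return random.randrange(N_ACTIONS)
    with torch.no_grad():
        return int(agents[agent_id](state).argmax())


def td_update(agent_id: int, s, a, r, s_next, done: bool):
    """One-step Q-learning update on the agent's own network."""
    q = agents[agent_id](s)[a]
    with torch.no_grad():
        bootstrap = 0.0 if done else float(agents[agent_id](s_next).max())
        target = torch.tensor(r + GAMMA * bootstrap)
    loss = nn.functional.mse_loss(q, target)
    optims[agent_id].zero_grad()
    loss.backward()
    optims[agent_id].step()


# Example step with a dummy observation for agent 0:
s = torch.randn(STATE_DIMS[desc])
a = act(0, s)
td_update(0, s, a, r=1.0, s_next=torch.randn(STATE_DIMS[desc]), done=False)
```

Switching `desc` changes only the input encoding while leaving learning untouched, which is the kind of comparison the paper uses to study how state descriptions shape the emergent coordination structures.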

Citation (APA)

Miyashita, Y., & Sugawara, T. (2019). Coordination in Collaborative Work by Deep Reinforcement Learning with Various State Descriptions. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11873 LNAI, pp. 550–558). Springer. https://doi.org/10.1007/978-3-030-33792-6_40
