“TLeague: A Framework for Competitive Self-Play Based Distributed Multi-Agent Reinforcement Learning”, Peng Sun, Jiechao Xiong, Lei Han, Xinghai Sun, Shuxing Li, Jiawei Xu, Meng Fang, Zhengyou Zhang (2020-11-25):

Competitive Self-Play (CSP) based Multi-Agent Reinforcement Learning (MARL) has shown phenomenal breakthroughs recently. Strong AIs have been achieved for several benchmarks, including Dota 2, Honor of Kings, Quake III, and StarCraft II, to name a few. Despite these successes, MARL training is extremely data-hungry, typically requiring billions (if not trillions) of frames from the environment in order to learn a high-performance agent. This poses non-trivial difficulties for researchers and engineers and prevents the application of MARL to a broader range of real-world problems.

To address this issue, in this manuscript we describe a framework, referred to as TLeague, that aims at large-scale training and implements several mainstream CSP-MARL algorithms. Training can be deployed on either a single machine or a cluster of hybrid machines (CPUs and GPUs), with standard Kubernetes supported in a cloud-native manner. TLeague achieves high throughput and reasonable scale-up in distributed training. Thanks to its modular design, it is also easy to extend for solving other multi-agent problems or for implementing and verifying MARL algorithms.

We present experiments over StarCraft II, ViZDoom and Pommerman to show the efficiency and effectiveness of TLeague.

The code is open-sourced and available at this URL.
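The abstract does not spell out the training loop, but a competitive self-play league typically cycles through: sample an opponent from a pool of frozen past policies, play matches, update the learner, and periodically snapshot the learner back into the pool. A minimal toy sketch of that loop, with scalar "skills" standing in for policies (all names are hypothetical illustrations, not TLeague's actual API):

```python
import random

class League:
    """Toy competitive self-play league: past snapshots serve as opponents."""
    def __init__(self):
        self.snapshots = []  # frozen past policies (here: just skill floats)

    def add(self, skill):
        self.snapshots.append(skill)

    def sample_opponent(self):
        # Uniform sampling over history; real systems often weight
        # opponents by win-rate (e.g., prioritized fictitious self-play).
        return random.choice(self.snapshots)

def train(steps=100, snapshot_every=10, lr=0.05, seed=0):
    random.seed(seed)
    league = League()
    learner = 0.0            # stand-in for policy parameters
    league.add(learner)      # the initial policy seeds the league
    for t in range(1, steps + 1):
        opp = league.sample_opponent()
        # Simulated match outcome: higher "skill" wins more often (Elo-like).
        win = random.random() < 1 / (1 + 10 ** (opp - learner))
        learner += lr if win else lr * 0.5   # toy update: improve either way
        if t % snapshot_every == 0:
            league.add(learner)              # freeze learner as a new opponent
    return learner, len(league.snapshots)
```

In a real distributed setup the match simulation is replaced by many parallel actor processes generating environment frames, and the learner update by gradient steps on a GPU learner; the league/matchmaking logic above is the part that makes the self-play "competitive".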