“If Multi-Agent Learning Is the Answer, What Is the Question?”, 2007-05:
The area of learning in multi-agent systems is today one of the most fertile grounds for interaction between game theory and artificial intelligence. We focus on the foundational questions in this interdisciplinary area, and identify several distinct agendas that, we argue, ought to be separated. The goal of this article is to start a discussion in the research community that will result in firmer foundations for the area.
…Indeed, upon close examination, it becomes clear that the very foundations of MAL could benefit from explicit discussion. What exact question or questions is MAL addressing? What are the yardsticks by which to measure answers to these questions? The present article focuses on these foundational questions. To start with the punch line: following an extensive look at the literature, we have reached two conclusions:
There are several different agendas being pursued in the MAL literature. They are often left implicit and conflated; the result is that it is hard to evaluate and compare results.
We ourselves can identify and make sense of 5 distinct research agendas.
Not all work in the field falls into one of the 5 agendas we identify. This is not necessarily a critique of such work; it simply means that one must identify yet other well-motivated and well-defined problems addressed by that work. We expect that as a result of our throwing down the gauntlet additional such problems will be defined, but also that some past work will be re-evaluated and reconstructed. Certainly we hope that future work will always be conducted and evaluated against well-defined criteria, guided by this article and the discussion it engenders among our colleagues in AI and game theory. In general we view this article not as a final statement but as the start of a discussion.

To get to the punch line outlined above, we proceed as follows. In the next section we define the formal setting on which we focus. In §3 we illustrate why the question of learning in multi-agent settings is inherently more complex than in the single-agent setting, and why it places a stress on basic game-theoretic notions. In §4 we provide some concrete examples of MAL approaches from both game theory and AI. This is anything but comprehensive coverage of the area, and the selection is not a value judgment; our intention is to anchor the discussion in something concrete for the benefit of the reader who is not familiar with the area, and, within the formal confines we discuss in §2, the examples span the space of MAL reasonably well. In §5 we identify 5 different agendas that we see (usually) implicit in the literature, and which we argue should be made explicit and teased apart. We end in §6 with a summary of the main points made in this article.
…In this article we have made the following points:
Learning in MAS is conceptually, not only technically, challenging.
One needs to be crystal clear about the problem being addressed and the associated evaluation criteria.
For the field to advance, one cannot simply define arbitrary learning strategies and analyze whether the resulting dynamics converge in certain cases to a Nash equilibrium or some other solution concept of the stage game; this, in and of itself, is not well motivated.
We have identified 5 coherent agendas.
Not all work in the field falls into one of these buckets. This means that either we need more buckets, or some work needs to be revisited or reconstructed so as to be well grounded.
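To make the third point concrete, here is a minimal sketch (not from the article; the game and parameters are chosen for illustration) of the kind of result it criticizes: fictitious play in matching pennies, where each player repeatedly best-responds to the opponent's empirical action frequencies, and those frequencies converge to the stage game's unique mixed-strategy Nash equilibrium.

```python
import numpy as np

# Matching pennies (zero-sum): the row player wants to match,
# the column player wants to mismatch.
A = np.array([[1, -1], [-1, 1]])  # row player's payoffs
B = -A                            # column player's payoffs

def fictitious_play(rounds=10_000):
    """Each player best-responds to the opponent's empirical mixture."""
    counts = [np.ones(2), np.ones(2)]  # pseudo-counts of observed actions
    for _ in range(rounds):
        p_row = counts[0] / counts[0].sum()  # row player's empirical mixture
        p_col = counts[1] / counts[1].sum()  # column player's empirical mixture
        counts[0][np.argmax(A @ p_col)] += 1  # row best response
        counts[1][np.argmax(p_row @ B)] += 1  # column best response
    return counts[0] / counts[0].sum(), counts[1] / counts[1].sum()

row_freq, col_freq = fictitious_play()
# The empirical frequencies approach the stage game's unique mixed
# Nash equilibrium, (1/2, 1/2) for each player.
```

Such a convergence demonstration is easy to produce; the article's point is that convergence of the dynamics to an equilibrium of the stage game, by itself, does not answer any of the well-defined questions the 5 agendas pose.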
See Also:
A Survey and Critique of Multiagent Deep Reinforcement Learning
A Review of Cooperative Multi-Agent Deep Reinforcement Learning
Multi-Agent Reinforcement Learning: A Selective Overview of Theories and Algorithms
Foundations for Transfer in Reinforcement Learning: A Taxonomy of Knowledge Modalities
Collective Intelligence for Deep Learning: A Survey of Recent Developments
From reinforcement learning to agency: Frameworks for understanding basal cognition