"More Agents Is All You Need", 2024-02-03 ():
[poorly reinventing inner-monologue self-distillation… minus the distillation.] We find that, simply via a sampling-and-voting method, the performance of large language models (LLMs) scales with the number of agents instantiated [but flatlines hard after just ~10 "agents", possibly handicapped by flattened-logits]. Also, this method is orthogonal to existing complicated methods to further enhance LLMs, while the degree of enhancement is correlated to the task difficulty.
We conduct comprehensive experiments on a wide range of LLM benchmarks to verify the presence of our finding, and to study the properties that can facilitate its occurrence.
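[The sampling-and-voting idea reduces to a few lines: draw several independent answers from the model and return the plurality answer. A minimal sketch, where `sampler` is a hypothetical placeholder for a real temperature-sampled LLM call:]

```python
from collections import Counter
from typing import Callable

def sample_and_vote(sampler: Callable[[str], str], prompt: str,
                    n_agents: int = 10) -> str:
    """Draw n_agents independent answers and return the majority vote.

    `sampler` stands in for one stochastic LLM query (temperature > 0,
    so repeated calls yield diverse answers). Ties break arbitrarily
    by first-seen order, as Counter.most_common does.
    """
    answers = [sampler(prompt) for _ in range(n_agents)]
    return Counter(answers).most_common(1)[0][0]
```

[Accuracy gains come entirely from the vote concentrating on the modal answer, which is why the curve saturates once the majority is already stable at ~10 samples.]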
Our code is publicly available at: https://anonymous.4open.science/api/repo/more_agent_is_all_you_need/file/.