“More Agents Is All You Need”, Junyou Li, Qin Zhang, Yangbin Yu, Qiang Fu, Deheng Ye (2024-02-03):

[Poorly reinventing inner-monologue self-distillation… minus the distillation.] We find that, simply via a sampling-and-voting method, the performance of large language models (LLMs) scales with the number of agents instantiated [but flatlines hard after just ~10 ‘agents’, possibly handicapped by flattened logits]. This method is also orthogonal to existing, more complicated enhancement methods, and the degree of enhancement correlates with task difficulty.
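
A minimal sketch of the sampling-and-voting loop, assuming a hypothetical `query_llm(prompt)` callable that returns one sampled answer per call (at temperature > 0, so repeated calls differ); this illustrates the general technique, not the authors’ exact implementation:

```python
from collections import Counter

def sample_and_vote(query_llm, prompt: str, n_agents: int = 10) -> str:
    """Query the model n_agents times and return the majority answer.

    `query_llm` is a placeholder for any sampling call (e.g. an
    OpenAI-style chat-completion wrapper) that maps a prompt string
    to a single final-answer string.
    """
    answers = [query_llm(prompt) for _ in range(n_agents)]
    # Majority vote: the most frequent answer wins; ties are broken
    # arbitrarily by Counter's insertion order.
    return Counter(answers).most_common(1)[0][0]
```

Majority voting only applies directly to tasks with discrete, comparable answers (e.g. math word problems or multiple choice); the observed flatline at ~10 agents means `n_agents` beyond that mostly wastes compute.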

We conduct comprehensive experiments on a wide range of LLM benchmarks to verify our finding, and to study the properties that facilitate it.

Our code is publicly available at: https://anonymous.4open.science/api/repo/more_agent_is_all_you_need/file/.