“NAVIX: Scaling MiniGrid Environments With JAX”, Eduardo Pignatelli, Jarek Liesen, Robert Tjarko Lange, Chris Lu, Pablo Samuel Castro, Laura Toni (2024-07-28):

As Deep Reinforcement Learning (DRL) research moves toward solving large-scale worlds, efficient environment simulation becomes crucial for rapid experimentation. However, most existing environments struggle to scale to high throughput, holding back meaningful progress. Interactions are typically computed on the CPU, which limits training speed and throughput both through slower computation and through communication overhead when the task is distributed across multiple machines. Ultimately, DRL training is CPU-bound, and developing batched, fast, and scalable environments has become a frontier for progress.

Among the most widely used RL environments, MiniGrid is at the foundation of several studies on exploration, curriculum learning, representation learning, diversity, meta-learning, credit assignment, and language-conditioned RL, yet it still suffers from the limitations described above.

In this work, we introduce NAVIX, a re-implementation of MiniGrid in JAX. NAVIX achieves over 200,000× speed improvements in batch mode, supporting up to 2,048 agents in parallel on a single Nvidia A100 80 GB.
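The speedup comes from expressing the environment's step function as a pure JAX function, so it can be JIT-compiled and batched across agents with `jax.vmap` on a single accelerator. The sketch below is a toy illustration of that pattern, not the real NAVIX API: the 5×5 grid, the `step` function, and all names are hypothetical stand-ins for how a JAX-native gridworld can run thousands of agents in parallel.

```python
import jax
import jax.numpy as jnp

# Hypothetical toy gridworld (NOT the NAVIX API): the agent moves on a
# 5x5 grid and earns reward 1.0 upon reaching the bottom-right corner.
# Because step() is a pure function of arrays, JAX can vectorize it.
def step(pos, action):
    # actions: 0=up, 1=down, 2=left, 3=right
    moves = jnp.array([[-1, 0], [1, 0], [0, -1], [0, 1]])
    new_pos = jnp.clip(pos + moves[action], 0, 4)
    reward = jnp.where(jnp.all(new_pos == jnp.array([4, 4])), 1.0, 0.0)
    return new_pos, reward

# One vmap turns the single-agent step into a batched step over
# 2,048 parallel agents; jit compiles the whole thing for the device.
batched_step = jax.jit(jax.vmap(step))

key = jax.random.PRNGKey(0)
positions = jnp.zeros((2048, 2), dtype=jnp.int32)
actions = jax.random.randint(key, (2048,), 0, 4)
positions, rewards = batched_step(positions, actions)
print(positions.shape)  # (2048, 2)
```

The same composition of `jit` and `vmap` is what lets a JAX-native environment keep the entire training loop on the accelerator, avoiding the CPU-to-device transfers that bottleneck conventional setups.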

This reduces experiment times from one week to 15 minutes, promoting faster design iterations and more scalable RL model development.