“Learning Agile Soccer Skills for a Bipedal Robot With Deep Reinforcement Learning”, 2023-04-26:
We investigate whether Deep Reinforcement Learning (Deep RL) can synthesize sophisticated and safe movement skills for a low-cost, miniature humanoid robot, and whether those skills can be composed into complex behavioral strategies in dynamic environments.
We used Deep RL to train a humanoid robot with 20 actuated joints to play a simplified one-versus-one (1v1) soccer game. We first trained individual skills in isolation and then composed those skills end-to-end in a self-play setting. The resulting policy exhibits robust and dynamic movement skills, such as rapid fall recovery, walking, turning, and kicking, and it transitions between them in a smooth, stable, and efficient manner, well beyond what is intuitively expected from the robot. The agents also developed a basic strategic understanding of the game and learned, for instance, to anticipate ball movements and to block opponent shots. The full range of behaviors emerged from a small set of simple rewards.
Our agents were trained in simulation and transferred to real robots zero-shot. We found that a combination of sufficiently high-frequency control, targeted dynamics randomization, and perturbations during training in simulation enabled good-quality transfer, despite unmodeled effects and variations across robot instances. Although the robots are inherently fragile, minor hardware modifications together with basic regularization of the behavior during training led the robots to learn safe and effective movements while still performing in a dynamic and agile way.
Indeed, even though the agents were optimized for scoring, in experiments they walked 156% faster, took 63% less time to get up, and kicked 24% faster than a scripted baseline, while efficiently combining these skills to achieve the longer-term objectives.
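The sim-to-real recipe described above, per-episode dynamics randomization plus random physical perturbations during training, can be sketched in a few lines. Everything here is illustrative: the parameter names, ranges, and push probability are assumptions for the sketch, not the values used in the paper.

```python
import random

# Hypothetical randomization ranges; the paper does not list its exact
# values here, so these numbers are purely illustrative.
DYNAMICS_RANGES = {
    "mass_scale":     (0.8, 1.2),    # multiplicative body-mass noise
    "joint_friction": (0.5, 2.0),    # actuator friction multiplier
    "control_delay":  (0.0, 0.02),   # seconds of added action latency
}

def sample_dynamics(rng: random.Random) -> dict:
    """Draw one randomized dynamics setting at the start of each episode."""
    return {name: rng.uniform(lo, hi)
            for name, (lo, hi) in DYNAMICS_RANGES.items()}

def sample_push(rng: random.Random, max_force: float = 10.0) -> tuple:
    """Occasionally return a random (x, y) push force applied to the torso."""
    if rng.random() < 0.05:  # perturb on roughly 5% of control steps
        return (rng.uniform(-max_force, max_force),
                rng.uniform(-max_force, max_force))
    return (0.0, 0.0)
```

In a typical training loop, `sample_dynamics` would be called once per episode to reconfigure the simulator, while `sample_push` would be queried every control step so that the policy learns to recover from unexpected disturbances.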
Examples of the emergent behaviors and full 1v1 matches are available on the supplementary website:
“Learning Agile Soccer Skills for a Bipedal Robot with Deep Reinforcement Learning”.
“Unedited demonstration matches for Robotis OP3 1v1 robot soccer.”
Five one-versus-one matches, representative of the typical behavior and gameplay of the fully trained soccer agent.
Recurring skills and strategies selected from typical one-versus-one play. The agent demonstrates agile skills, including getting up and turning; reactive behavior, including kicking a moving ball; object interaction, including ball control; dynamic defensive blocking; and strategic play, including defensive positioning. The agent also transitions quickly between skills (for example, turning, chasing, controlling, then kicking) and combines them (for example, frequently turning and kicking).
“OP3 soccer training in simulation”.
We first trained individual skills in isolation, in simulation, and then composed those skills end-to-end in a self-play setting. We found that a combination of sufficiently high-frequency control, targeted dynamics randomization, and perturbations during training in simulation enabled good-quality transfer to the robot.
We analyzed the agent’s performance in two set pieces, to gauge the reliability of the getting-up and shooting behaviors and to measure the performance gap between the simulation and the real environment. We also compared the learned behaviors with scripted baseline skills. In these experiments, the agents walked 156% faster, took 63% less time to get up, and kicked 24% faster than the scripted baseline.
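The relative-improvement figures quoted here presumably follow the standard definitions of percent speed-up and percent time saved; the helper below makes those definitions explicit. The example magnitudes in the docstrings are invented for illustration, not the measured values.

```python
def percent_faster(agent: float, baseline: float) -> float:
    """Relative speed-up of `agent` over `baseline`.

    Illustrative example: an agent speed of 2.56 (in arbitrary units)
    against a baseline of 1.0 is a 156% speed-up.
    """
    return 100.0 * (agent - baseline) / baseline

def percent_less_time(agent: float, baseline: float) -> float:
    """Relative time saved by `agent` compared with `baseline`.

    Illustrative example: taking 3.7 s instead of 10.0 s to get up
    is 63% less time.
    """
    return 100.0 * (baseline - agent) / baseline
```

Note the asymmetry: "156% faster" means the agent moves at 2.56 times the baseline speed, whereas "63% less time" means the agent needs only 0.37 times the baseline duration.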
Although the robots are inherently fragile, minor hardware modifications together with basic regularization of the behavior during training led to safe and effective movements, while still allowing the robots to perform in a dynamic and agile way.
Preliminary Results: Learning From Vision
We further investigate whether Deep Reinforcement Learning (Deep RL) agents can learn directly from raw egocentric vision. In this context, the agent must learn to control its camera and integrate information over a window of egocentric viewpoints to predict various aspects of the game. Our preliminary analysis shows that Deep RL is a promising approach to this challenging problem, with 10/10 goals scored in our simulation set piece and 6/10 scored on the real robot.
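One simple way to integrate information over a window of egocentric viewpoints is to stack the most recent camera frames into a single observation. The sketch below shows that idea; the window length and frame shape are illustrative assumptions, and the actual agent may well use a different memory mechanism (such as recurrence).

```python
from collections import deque

import numpy as np

class FrameStack:
    """Keep the last `k` egocentric camera frames as one stacked observation.

    Minimal sketch: stacking recent frames gives a reactive policy a
    short window of visual memory. The default k=4 is an illustrative
    choice, not the robot's actual configuration.
    """

    def __init__(self, k: int = 4):
        self.k = k
        self.frames = deque(maxlen=k)

    def reset(self, first_frame: np.ndarray) -> np.ndarray:
        self.frames.clear()
        for _ in range(self.k):  # pad the window with copies of the first frame
            self.frames.append(first_frame)
        return self.observation()

    def step(self, frame: np.ndarray) -> np.ndarray:
        self.frames.append(frame)  # the oldest frame drops out automatically
        return self.observation()

    def observation(self) -> np.ndarray:
        return np.stack(self.frames, axis=0)  # shape (k, H, W)
```

At each control step the policy would receive the stacked `(k, H, W)` array, so that ball velocity and opponent motion can be inferred from differences between consecutive frames.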