Takahiro Miki, Joonho Lee, Lorenz Wellhausen, and Marco Hutter. This paper has been accepted to ICRA 2024.

Video chapters:
0:00 Introduction
0:20 Method overview
0:27 Low-level policy training
0:40 Low-level policy testing
1:16 High-level policy training
1:57 High-level policy distillation
2:12 High-level policy testing

Abstract: Legged robots have the potential to traverse complex terrain and access confined spaces beyond the reach of traditional platforms, thanks to their ability to carefully select footholds and flexibly adapt their body posture while walking. However, robust deployment in real-world applications remains an open challenge. In this paper, we present a method for legged locomotion control that uses reinforcement learning and 3D volumetric representations to enable robust and versatile locomotion in confined and unstructured environments. By employing a two-layer hierarchical policy structure, we exploit the capabilities of a highly robust low-level policy that follows 6D commands and a high-level policy that provides three-dimensional spatial awareness for navigating under overhanging obstacles. Our study includes the development of a procedural terrain generator to create diverse training environments. We present a series of experimental evaluations in both simulation and real-world settings, demonstrating the effectiveness of our approach in controlling a quadruped robot in confined, rough terrain. By achieving this, our work extends the applicability of legged robots to a broader range of scenarios.
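To make the two-layer hierarchy described in the abstract more concrete, below is a minimal sketch (not the authors' code) of how such a structure could be wired: a high-level policy consumes a 3D volumetric observation and outputs a 6D body command, which a robust low-level policy tracks from proprioception. All module names, tensor shapes, and hyperparameters here are illustrative assumptions.

```python
# Hypothetical sketch of the two-layer hierarchical policy structure.
# Shapes and network sizes are placeholders, not values from the paper.
import torch
import torch.nn as nn


class HighLevelPolicy(nn.Module):
    """Maps a 3D volumetric observation of the surroundings to a 6D body command."""

    def __init__(self, voxel_shape=(32, 32, 16)):
        super().__init__()
        self.encoder = nn.Sequential(          # 3D CNN over a local occupancy voxel grid
            nn.Conv3d(1, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
        )
        with torch.no_grad():
            feat_dim = self.encoder(torch.zeros(1, 1, *voxel_shape)).shape[1]
        self.head = nn.Linear(feat_dim, 6)     # 6D command: linear + angular body velocity

    def forward(self, voxels):
        return torch.tanh(self.head(self.encoder(voxels)))


class LowLevelPolicy(nn.Module):
    """Tracks the 6D command using proprioception, outputting joint position targets."""

    def __init__(self, proprio_dim=48, num_joints=12):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(proprio_dim + 6, 256), nn.ELU(),
            nn.Linear(256, 128), nn.ELU(),
            nn.Linear(128, num_joints),
        )

    def forward(self, proprio, command):
        return self.net(torch.cat([proprio, command], dim=-1))


# One control step of the hierarchy (dummy inputs for illustration).
high, low = HighLevelPolicy(), LowLevelPolicy()
voxels = torch.zeros(1, 1, 32, 32, 16)         # volumetric map around the robot
proprio = torch.zeros(1, 48)                   # joint states, base velocity, gravity, ...
command = high(voxels)                         # 6D body command
joint_targets = low(proprio, command)          # e.g. sent to joint PD controllers
```

In this kind of decomposition, the low-level policy can be trained first to robustly follow arbitrary 6D commands, and the high-level policy is then trained (and, per the chapter list, distilled) on top of it to exploit 3D spatial awareness, e.g. ducking under overhanging obstacles.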