TUM AI Lecture Series - Learning to Walk with Vision and Proprioception (Jitendra Malik)


Abstract: Legged locomotion is commonly studied and programmed as a discrete set of structured gait patterns, like walk, trot, gallop. However, studies of children learning to walk (Adolph et al.) show that real-world locomotion is often quite unstructured and more like “bouts of intermittent steps”. We have developed a general approach to walking that is built on learning over varied terrains in simulation followed by fast online adaptation (fractions of a second) in the real world. This is made possible by our Rapid Motor Adaptation (RMA) algorithm. RMA consists of two components: a base policy and an adaptation module, both of which can be trained in simulation. We thus learn walking policies that are much more flexible and adaptable. In our setup, gaits emerge as a consequence of minimizing energy consumption at different target speeds, consistent with various animal motor studies. We then incrementally add a navigation layer to the robot, driven by onboard cameras, and tightly couple it with locomotion.
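The two-component structure described in the abstract can be sketched in a few lines. This is a minimal, illustrative mock-up, not the authors' implementation: the network sizes, the latent "extrinsics" dimension, and the history length are all assumptions, and the weights are random rather than trained. It only shows how the pieces connect: the base policy consumes the current state plus a latent extrinsics vector, and the adaptation module estimates that vector online from recent state-action history, so no privileged simulator information is needed on the real robot.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed dimensions, for illustration only.
STATE_DIM, ACT_DIM, Z_DIM, HIST_LEN = 30, 12, 8, 50

def mlp(sizes):
    # Random, untrained weights; in RMA both networks are trained in simulation.
    return [rng.standard_normal((m, n)) * 0.1 for m, n in zip(sizes[:-1], sizes[1:])]

def forward(params, x):
    for W in params[:-1]:
        x = np.tanh(x @ W)
    return x @ params[-1]

# Base policy pi(a | state, z): during training it sees a latent code z
# encoding privileged environment factors (friction, payload, terrain).
base_policy = mlp([STATE_DIM + Z_DIM, 64, ACT_DIM])

# Adaptation module phi: estimates z from the recent state-action history
# alone, enabling adaptation within fractions of a second at deployment.
adaptation = mlp([HIST_LEN * (STATE_DIM + ACT_DIM), 64, Z_DIM])

history = rng.standard_normal(HIST_LEN * (STATE_DIM + ACT_DIM))
state = rng.standard_normal(STATE_DIM)

z_hat = forward(adaptation, history)                       # online extrinsics estimate
action = forward(base_policy, np.concatenate([state, z_hat]))
```

At each control step the robot re-estimates `z_hat` from its rolling history, so the same base policy adapts continuously without retraining.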
