Closing the perception-action loop with deep robotic learning

Lecture / Panel
For NYU Community

Speaker: Yuke Zhu, Stanford University

Abstract: Robots and autonomous systems play a significant role in the modern economy. Custom-built robots have remarkably improved productivity, operational safety, and product quality. However, these robots are usually programmed for specific tasks in well-controlled environments and are unable to perform diverse tasks in the real world. In this talk, I will demonstrate how machine learning techniques, such as deep neural networks, offer powerful computational tools for building more effective and generalizable robot intelligence. I will discuss my research on learning-based methods that establish a tighter coupling between perception and action at three levels of abstraction: 1) learning primitive motor skills from raw sensory data, 2) sharing knowledge between sequential tasks in visual environments, and 3) learning hierarchical task structures from video demonstrations.

About the Speaker: Yuke Zhu is a final-year Ph.D. candidate in the Department of Computer Science at Stanford University, advised by Prof. Fei-Fei Li and Prof. Silvio Savarese. His research interests lie at the intersection of machine learning, computer vision, and robotics, and his work develops machine learning algorithms for general-purpose robots. He received a Master's degree from Stanford University and dual Bachelor's degrees from Zhejiang University and Simon Fraser University. He has also collaborated with research labs including Snap Research, the Allen Institute for Artificial Intelligence, and DeepMind.