Events
Seminar @ Cornell Tech: Michelle Lee
Fusion for Robot Perception and Controls
Machine learning has led to powerful advances in robotics: deep learning for visual perception from raw images and deep reinforcement learning (RL) for learning controls from trial and error. Yet these black-box techniques often require large amounts of data, produce results that are difficult to interpret, and fail catastrophically when dealing with out-of-distribution data. In this talk, I will introduce the concept of “fusion” in robot perception and controls for robust, sample-efficient, and generalizable robot learning. On the perception side, we fuse multiple sensor modalities and demonstrate generalization to new task instances and robustness to out-of-distribution sensor failures. On the controls side, we leverage fusion by combining known models with learned policies, making our policy learning substantially more sample efficient.
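As a rough illustration of the perception-side idea (not the speaker's implementation), the sketch below fuses two sensor modalities, camera images and force/torque readings, by encoding each separately and concatenating the embeddings into one state representation; all architecture choices and dimensions are illustrative assumptions.

```python
# Minimal multimodal fusion sketch (assumed PyTorch architecture, for illustration only):
# separate encoders for vision and force/torque, fused by concatenation.
import torch
import torch.nn as nn


class FusionEncoder(nn.Module):
    def __init__(self, latent_dim: int = 64):
        super().__init__()
        # Small CNN for 64x64 RGB images (sizes chosen arbitrarily for the sketch).
        self.image_encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=4, stride=2), nn.ReLU(),
            nn.Flatten(),
            nn.LazyLinear(latent_dim), nn.ReLU(),
        )
        # MLP for a 6-axis force/torque reading.
        self.ft_encoder = nn.Sequential(
            nn.Linear(6, 32), nn.ReLU(),
            nn.Linear(32, latent_dim), nn.ReLU(),
        )
        # Fuse the per-modality embeddings into a single representation.
        self.fusion = nn.Sequential(
            nn.Linear(2 * latent_dim, latent_dim), nn.ReLU(),
        )

    def forward(self, image: torch.Tensor, force_torque: torch.Tensor) -> torch.Tensor:
        z_img = self.image_encoder(image)      # (batch, latent_dim)
        z_ft = self.ft_encoder(force_torque)   # (batch, latent_dim)
        return self.fusion(torch.cat([z_img, z_ft], dim=-1))


if __name__ == "__main__":
    encoder = FusionEncoder()
    fused = encoder(torch.randn(1, 3, 64, 64), torch.randn(1, 6))
    print(fused.shape)  # torch.Size([1, 64])
```

A downstream policy would then consume this fused representation instead of any single raw modality, which is what gives robustness when one sensor degrades.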
Speaker Bio
Michelle A. Lee is a final-year Ph.D. candidate in the Stanford AI Lab at Stanford University. She works in the Interactive Perception and Robot Learning Lab, advised by Prof. Jeannette Bohg, and is a collaborator in the People, AI, Robots group, led by Fei-Fei Li and Silvio Savarese. Working at the intersection of perception, controls, and robot learning, her research interests lie in developing data-driven algorithms for real-world robotic manipulation tasks. She is currently working on how fusion of policies and multimodal state inputs can enable algorithmic generalization, robustness, and efficiency. Previously, she conducted research at the NVIDIA Robotics Lab. Her work has received best paper awards at ICRA 2019 and the NeurIPS 2019 Robot Learning workshop.