June 8, 2020 Seminar: Stanford’s Jeannette Bohg on Embracing Uncertainty in Robotic Manipulation

Acceptance Over Ignorance: How to Embrace Uncertainty in Robotic Manipulation

Speaker: Prof. Jeannette Bohg, Interactive Perception and Robot Learning Lab @ Stanford University
June 8, 2020 @ 1pm

Register for the UofT Robotics Newsletter to learn more about our Seminar Series – seminars are open to all UofT students and faculty

Abstract
My research is driven by the puzzle of why humans can effortlessly manipulate any kind of object while it is so hard to reproduce this skill on a robot. Humans easily cope with uncertainty both in perceiving the environment and in the effects of their manipulation actions. One hypothesis is that humans are exceptionally accurate at perceiving and predicting how their environment will evolve; if so, improving the accuracy of perception and prediction is one way forward. In this talk, I would like to advocate a different view of this problem: what if we will never reach perfect accuracy? If we accept that premise, then an important step towards more robust robotic manipulation is to develop methods that can cope with a base level of uncertainty and with unexpected events.

I will present three approaches that embrace uncertainty in robotic manipulation. First, an approach in which one robot scaffolds the learning of another by optimally placing physical fixtures in the environment. When optimally placed, these fixtures funnel uncertainty and thereby dramatically increase the learning speed of the manipulation task. Second, an approach that goes beyond a single manipulation task by performing task and motion planning. We propose to combine a logic planner with a trajectory optimiser, where the output is a sequence of Cartesian frames defined relative to an object. This object-centric approach has the advantage that the plan remains valid even if the environment changes in an unforeseen way. Third, an approach for deformable object manipulation, a task made challenging by a high-dimensional state space and complex dynamics. Despite large degrees of uncertainty, a continuously re-planning model-predictive control approach makes the system inherently robust to unforeseen dynamics.

Bio
Jeannette Bohg is an Assistant Professor of Computer Science at Stanford University. She was a group leader at the Autonomous Motion Department (AMD) of the MPI for Intelligent Systems until September 2017. Before joining AMD in January 2012, she was a PhD student in the Division of Robotics, Perception and Learning (RPL) at KTH in Stockholm. In her thesis, she proposed novel methods for multi-modal scene understanding for robotic grasping. She also studied at Chalmers in Gothenburg and at the Technical University of Dresden, where she received her Master’s in Art and Technology and her Diploma in Computer Science, respectively. Her research focuses on perception and learning for autonomous robotic manipulation and grasping. She is specifically interested in developing methods that are goal-directed, real-time, and multi-modal, so that they can provide meaningful feedback for execution and learning. She has received several awards, most notably the 2019 IEEE International Conference on Robotics and Automation (ICRA) Best Paper Award, the 2019 IEEE Robotics and Automation Society Early Career Award, and the 2017 IEEE Robotics and Automation Letters (RA-L) Best Paper Award.