Spatial Perception for Robots and Autonomous Vehicles: Certifiable Algorithms and Human-level Understanding
Speaker: Luca Carlone, MIT
June 29, 1 pm EDT
Register for the UofT Robotics Newsletter to learn more about our Seminar Series – seminars are open to all UofT students and faculty.
Abstract:
Spatial perception has witnessed unprecedented progress in the last decade. Robots are now able to detect objects and create large-scale maps of an unknown environment, which are crucial capabilities for navigation and manipulation. Despite these advances, both researchers and practitioners are well aware of the brittleness of current perception systems, and a large gap still separates robot and human perception. While many applications can afford occasional failures (e.g., AR/VR, domestic robotics) or can structure the environment to simplify perception (e.g., industrial robotics), safety-critical applications of robotics in the wild, ranging from self-driving vehicles to search & rescue, demand a new generation of algorithms.
This talk discusses two efforts targeted at bridging this gap. The first focuses on robustness. I present recent advances in the design of certifiable perception algorithms that are robust to extreme amounts of outliers and afford performance guarantees. In particular, I describe fast certifiable algorithms for object pose estimation: our algorithms are “hard to break” (e.g., they are robust to 99% outliers) and succeed in localizing objects where an average human would fail. Moreover, they come with a “contract” that guarantees their input-output performance. I discuss the foundations of certifiable perception and motivate how these foundations can lead to safer systems while circumventing the intrinsic computational intractability of typical perception problems.
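To make the robustness problem concrete, below is a minimal Python sketch of outlier-robust rigid registration: a RANSAC-style loop around a closed-form Kabsch/SVD solver, with the final inlier ratio standing in for a formal certificate. This is an illustrative toy, not the certifiable approach discussed in the talk (see, e.g., the speaker's TEASER++ solver for an actual certifiable method); all function names and parameters here are invented for illustration.

```python
# Toy outlier-robust rigid registration (NOT the speaker's certifiable algorithm).
# RANSAC loop + Kabsch/SVD inner solver; the inlier ratio is a crude stand-in
# for the formal performance certificates discussed in the talk.
import numpy as np

def kabsch(src, dst):
    """Least-squares rotation R and translation t aligning src to dst (Kabsch/SVD)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    # Force det(R) = +1 so we return a proper rotation, not a reflection.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, mu_d - R @ mu_s

def robust_registration(src, dst, trials=5000, inlier_tol=0.05, seed=0):
    """Fit on minimal 3-point samples; keep the transform with the most inliers."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(trials):
        idx = rng.choice(len(src), size=3, replace=False)
        R, t = kabsch(src[idx], dst[idx])
        residuals = np.linalg.norm(src @ R.T + t - dst, axis=1)
        inliers = residuals < inlier_tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Refit on all inliers (assumes at least 3 were found).
    R, t = kabsch(src[best_inliers], dst[best_inliers])
    return R, t, best_inliers

# Demo: 100 correspondences, 70% corrupted by random outliers.
rng = np.random.default_rng(1)
src = rng.normal(size=(100, 3))
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
if np.linalg.det(Q) < 0:
    Q[:, 0] *= -1.0                  # make Q a proper rotation (det = +1)
t_true = np.array([0.5, -0.2, 1.0])
dst = src @ Q.T + t_true
mask = rng.random(100) < 0.7
dst[mask] = rng.normal(scale=3.0, size=(mask.sum(), 3))
R, t, inliers = robust_registration(src, dst)
print(np.allclose(R, Q, atol=1e-6), np.allclose(t, t_true, atol=1e-6))
```

Note why a sampling-based toy like this cannot reach the regime quoted in the abstract: with 99% outliers, the probability that a random 3-point sample is all inliers is 0.01³ = 10⁻⁶, so RANSAC would need millions of iterations to succeed with high probability. Avoiding that combinatorial blow-up while still providing guarantees is precisely what the certifiable algorithms in the talk are about.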
The second effort targets high-level understanding. While humans are able to quickly grasp both geometric and semantic aspects of a scene, high-level scene understanding remains a challenge for robotics. I present our recent work on actionable hierarchical representations, 3D Dynamic Scene Graphs, and discuss their potential impact on planning and decision-making, human-robot interaction, and long-term autonomy. The creation of a Dynamic Scene Graph requires a variety of algorithms, ranging from model-based estimation to deep learning, and offers new opportunities for both researchers and practitioners.
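For readers unfamiliar with scene graphs, the toy sketch below illustrates the kind of layered structure a 3D Dynamic Scene Graph organizes: a hierarchy of buildings, rooms, objects, and moving agents. The class names and API are invented for illustration and do not reflect the speaker's implementation (see the Kimera line of work for a real system).

```python
# Toy layered scene graph (illustrative only; API is hypothetical).
from dataclasses import dataclass, field

@dataclass
class Node:
    node_id: str
    layer: str                    # e.g., "building", "room", "object", "agent"
    position: tuple               # coarse 3D position (x, y, z)
    attributes: dict = field(default_factory=dict)

@dataclass
class SceneGraph:
    nodes: dict = field(default_factory=dict)    # node_id -> Node
    edges: list = field(default_factory=list)    # (parent_id, child_id) inclusion edges

    def add_node(self, node: Node) -> None:
        self.nodes[node.node_id] = node

    def add_edge(self, parent_id: str, child_id: str) -> None:
        self.edges.append((parent_id, child_id))

    def children(self, node_id: str) -> list:
        return [c for p, c in self.edges if p == node_id]

# A tiny hierarchy: a building contains a room, which contains a static
# object and a moving agent (the "dynamic" part of the graph).
g = SceneGraph()
g.add_node(Node("b0", "building", (0.0, 0.0, 0.0)))
g.add_node(Node("r1", "room", (2.0, 3.0, 0.0), {"label": "kitchen"}))
g.add_node(Node("o7", "object", (2.4, 3.1, 0.9), {"label": "mug"}))
g.add_node(Node("a2", "agent", (1.8, 2.5, 0.0), {"label": "human", "dynamic": True}))
g.add_edge("b0", "r1")
g.add_edge("r1", "o7")
g.add_edge("r1", "a2")

print(g.children("r1"))   # ['o7', 'a2']
```

The hierarchy is what makes such representations "actionable": a planner can reason abstractly at the room level (e.g., "go to the kitchen") and descend to metric detail only when a task requires it.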
Bio:
Luca Carlone is the Charles Stark Draper Assistant Professor in the Department of Aeronautics and Astronautics at the Massachusetts Institute of Technology, and a Principal Investigator in the Laboratory for Information & Decision Systems (LIDS). He received his PhD from the Polytechnic University of Turin in 2012. He joined LIDS as a postdoctoral associate (2015) and later as a research scientist (2016), after spending two years as a postdoctoral fellow at the Georgia Institute of Technology (2013–2015). His research interests include nonlinear estimation, numerical and distributed optimization, and probabilistic inference, applied to sensing, perception, and decision-making in single- and multi-robot systems. His work includes seminal results on certifiably correct algorithms for localization and mapping, as well as approaches for visual-inertial navigation and distributed mapping. He is a recipient of the 2017 IEEE Transactions on Robotics King-Sun Fu Memorial Best Paper Award, the Best Paper Award at WAFR’16, and the Best Student Paper Award at the 2018 Symposium on VLSI Circuits, and he was a Best Paper Award finalist at RSS’15. At MIT, he teaches “Robotics: Science and Systems,” the introduction to robotics for MIT undergraduates, and he created the graduate-level course “Visual Navigation for Autonomous Vehicles,” which covers mathematical foundations and fast C++ implementations of spatial perception algorithms for drones and autonomous vehicles.