Seminar: Jana Tumova on Formal methods for planning with safety constraints and preferences
Abstract: As autonomous robots move from enclosed environments to our everyday lives, we have to ask: How can we ensure that they work as expected, and how can we even specify what it means to work as expected? Formal methods have proven useful in addressing these questions; temporal logics provide a rich, rigorous, yet user-friendly specification formalism, and formal synthesis offers a way to automatically generate a plan that provably satisfies the specification (or satisfies it to some guaranteed extent).
In this talk, we focus on the specification of safety constraints and preferences in Signal Temporal Logic (STL), which can capture a variety of spatio-temporal requirements. We introduce quantitative semantics for STL to distinguish between more and less compliant and preferred motion plans, and present an RRT*-based motion planning algorithm that converges to the maximally satisfying plan. We then extend the concept of maximally satisfying planning to risk-aware planning in uncertain environments, such as autonomous vehicles in traffic scenarios. Here, we take into account not only the severity of violating a given specification, but also the probability of violation and the degree of uncertainty. Finally, we discuss a formal-methods-based solution to improving the safety of systems with complex dynamics that are difficult to model analytically. In particular, we address the safety of data-driven control for contact-rich manipulation.
Speaker Bio: Jana Tumova is an associate professor at the School of Electrical Engineering and Computer Science at KTH Royal Institute of Technology. She received her PhD in computer science from Masaryk University and was awarded an ACCESS postdoctoral fellowship at KTH in 2013. She was also a visiting researcher at MIT, Boston University, and the Singapore-MIT Alliance for Research and Technology. Her research interests include formal methods applied to decision making, motion planning, and control of autonomous systems. Among other projects, she is a recipient of a Swedish Research Council Starting Grant to explore compositional planning for multi-agent systems under temporal logic goals, and of WASP Expeditions and WASP NEST projects focusing on the design of correct-by-design and socially acceptable autonomous systems. She was awarded an Early Career Spotlight award at Robotics: Science and Systems 2021.
Seminar: Kostas Daniilidis on From Semantic SLAM to Semantic Navigation
Abstract: Progress in visual localization and mapping has led to deployed systems resulting in accurate metric localization and, recently, maps enriched with semantic labels. However, when a navigation command entails a semantic target like “go to the oven,” current SLAM systems have to explore the whole environment to find the oven.
We argue that a robot has to learn how to map by predicting the semantic and occupancy information outside its field of view. When dropped into an unseen environment, it can predict where a semantic target might be located and follow a policy according to the upper confidence bound. We present results in semantic mapping and on the PointNav and ObjectNav benchmarks. Moreover, we show how mapping can be facilitated by language instructions. To solve semantic mapping and navigation, we face the challenges of uncertainty estimation and equivariant mappings, which we will elaborate on in the talk.
Bio: Kostas Daniilidis is the Ruth Yalom Stone Professor of Computer and Information Science at the University of Pennsylvania, where he has been on the faculty since 1998. He is an IEEE Fellow. He was the director of the GRASP laboratory from 2008 to 2013. He obtained his undergraduate degree in Electrical Engineering from the National Technical University of Athens in 1986, and his PhD (Dr.rer.nat.) in Computer Science from the University of Karlsruhe in 1992, under the supervision of Hans-Hellmut Nagel. He received the Best Conference Paper Award at ICRA 2017. His most cited works have been at the intersection of learning, geometry, and visual navigation, particularly on 3D human pose estimation, equivariance, neuromorphic vision, hand-eye calibration, and visual odometry and mapping.
Seminar: Roberto Calandra on Perceiving, Understanding, & Interacting Through Touch
Abstract: Touch is a crucial sensory modality in both humans and robots. Recent advances in tactile sensing hardware have resulted -- for the first time -- in the availability of mass-produced, high-resolution, inexpensive, and reliable tactile sensors. In this talk, I will argue for the importance of creating a new computational field of "touch processing" dedicated to the processing and understanding of touch, analogous to what computer vision is for vision. This new field will present significant challenges in terms of both research and engineering. To start addressing some of these challenges, I will introduce our open-source ecosystem dedicated to touch sensing research. Finally, I will present some applications of touch in robotics and discuss other future applications.
Bio: Roberto Calandra is a Research Scientist at Meta AI (formerly Facebook AI Research). Previously, he was a Postdoctoral Scholar at the University of California, Berkeley (US) in the Berkeley Artificial Intelligence Research Laboratory (BAIR). His education includes a Ph.D. from TU Darmstadt (Germany), an M.Sc. in Machine Learning and Data Mining from Aalto University (Finland), and a B.Sc. in Computer Science from the Università degli studi di Palermo (Italy). His scientific interests lie broadly at the intersection of decision-making, robotics, and machine learning. He has served the scientific community, among other roles, as Program Chair for AISTATS 2020 and Guest Editor for JMLR, and by co-organizing over 17 workshops at international conferences (NeurIPS, ICML, ICLR, ICRA, IROS, RSS). He also led the development of DIGIT -- the first commercial compact high-resolution tactile sensor.