Animesh Garg and collaborators win Best Student Paper Award at RSS’21

UofT Robotics faculty had five papers accepted at Robotics: Science and Systems (RSS) this year, including one Best Student Paper Award! Congrats to Animesh Garg and co-authors for their winning paper, DiSECt: A Differentiable Simulation Engine for Autonomous Robotic Cutting (Eric Heiden, Miles Macklin, Yashraj S Narang, Dieter Fox, Animesh Garg, Fabio Ramos). See all the UofT Robotics papers at RSS below.

UofT authors are noted in bold.

GIFT: Generalizable Interaction-aware Functional Tool Affordances without Labels
Dylan Turpin, Liquan Wang, Stavros Tsogkas, Sven Dickinson, Animesh Garg

Tool use requires reasoning about the fit between an object’s affordances and the demands of a task. Visual affordance learning can benefit from goal-directed interaction experience, but current techniques rely on human labels or expert demonstrations to generate this data. In this paper, we describe a method that grounds affordances in physical interactions instead, thus removing the need for human labels or expert policies. We use an efficient sampling-based method to generate successful trajectories that provide contact data, which are then used to reveal affordance representations. Our framework, GIFT, operates in two phases: first, we discover visual affordances from goal-directed interaction with a set of procedurally generated tools; second, we train a model to predict new instances of the discovered affordances on novel tools in a self-supervised fashion. In our experiments, we show that GIFT can leverage a sparse keypoint representation to predict grasp and interaction points to accommodate multiple tasks, such as hooking, reaching, and hammering. GIFT outperforms baselines on all tasks and matches a human oracle on two of three tasks using novel tools.


Radar Odometry Combining Probabilistic Estimation and Unsupervised Feature Learning
Keenan Burnett, David J. Yoon, Angela P Schoellig, Tim Barfoot

This paper presents a radar odometry method that combines probabilistic trajectory estimation and deep learned features without needing groundtruth pose information. The feature network is trained unsupervised, using only the on-board radar data. With its theoretical foundation based on a data likelihood objective, our method leverages a deep network for processing rich radar data and a non-differentiable classic estimator for probabilistic inference. We provide extensive experimental results on both the publicly available Oxford Radar RobotCar Dataset and an additional 100 km of driving collected in an urban setting. Our sliding-window implementation of radar odometry outperforms most hand-crafted methods and approaches the current state of the art without requiring a groundtruth trajectory for training. We also demonstrate the effectiveness of radar odometry under adverse weather conditions. Code for this project can be found at:
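The "classic estimator" half of a pipeline like the one described above can be illustrated in isolation: given matched 2-D feature points from consecutive radar scans, the rigid motion between scans has a closed-form least-squares solution. The sketch below uses the standard SVD-based (Kabsch) alignment on synthetic matched points; it is a generic illustration with invented data, not the paper's learned-feature front end or its probabilistic sliding-window estimator.

```python
import numpy as np

def rigid_align_2d(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    # cross-covariance of the centered point sets
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return R, t

# synthetic matched landmarks and a known inter-scan motion
rng = np.random.default_rng(0)
pts = rng.uniform(-10, 10, size=(30, 2))
theta = 0.1
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
t_true = np.array([1.5, -0.3])
moved = pts @ R_true.T + t_true

R_est, t_est = rigid_align_2d(pts, moved)
```

With noise-free correspondences the recovered transform matches the true motion to numerical precision; in a real odometry pipeline the matched points would come from the learned features and the estimate would feed a probabilistic smoother.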


Robust Value Iteration for Continuous Control Tasks
Michael Lutter, Shie Mannor, Jan Peters, Dieter Fox, Animesh Garg

When transferring a control policy from simulation to a physical system, the policy needs to be robust to variations in the dynamics to perform well. Commonly, the optimal policy overfits to the approximate model and the corresponding state distribution, and therefore fails when transferred to the physical system. In this paper, we present robust value iteration. This approach uses dynamic programming to compute the optimal value function on the compact state domain and incorporates adversarial perturbations of the system dynamics. The adversarial perturbations cause the resulting optimal policy to be robust to changes in the dynamics. Utilizing the continuous-time perspective of reinforcement learning, we derive the optimal perturbations for the states, actions, observations, and model parameters in closed form. The resulting algorithm does not require discretization of states or actions, so the optimal adversarial perturbations can be efficiently incorporated in the min-max value function update. We apply the resulting algorithm to the physical Furuta pendulum and cartpole. By changing the masses of the systems, we evaluate the quantitative and qualitative performance across different model parameters. We show that robust value iteration is more robust than a deep reinforcement learning algorithm and the non-robust version of the algorithm.
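The min-max value function update at the heart of this idea can be shown on a toy problem: the agent maximizes over actions while an adversary minimizes over dynamics perturbations. The sketch below is a tabular chain-MDP analogue with an invented weakening adversary, purely for illustration; the paper's algorithm is continuous-time and requires no such discretization.

```python
import numpy as np

# Toy min-max value iteration on a 5-state chain. The agent moves by
# +/-2, the adversary shifts the outcome by -1, 0, or +1, and the agent
# plans against the worst case. States, actions, and rewards here are
# illustrative assumptions, not the paper's continuous control setup.
n_states = 5
goal = n_states - 1
actions = [-2, +2]
perturbations = [-1, 0, +1]
gamma = 0.9

V = np.zeros(n_states)
for _ in range(100):
    V_new = np.zeros(n_states)
    for s in range(n_states):
        if s == goal:
            continue  # absorbing goal state, value 0
        best = -np.inf
        for a in actions:
            # adversary picks the worst-case outcome for this action
            worst = np.inf
            for d in perturbations:
                s_next = int(np.clip(s + a + d, 0, n_states - 1))
                r = 1.0 if s_next == goal else 0.0
                worst = min(worst, r + gamma * V[s_next])
            best = max(best, worst)  # agent picks the best robust action
        V_new[s] = best
    V = V_new
```

Because the agent's step (2) exceeds the adversary's shift (1), progress toward the goal is still guaranteed and the robust values remain finite and monotone along the chain; an adversary as strong as the agent would drive all values to zero.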

DiSECt: A Differentiable Simulation Engine for Autonomous Robotic Cutting
Eric Heiden, Miles Macklin, Yashraj S Narang, Dieter Fox, Animesh Garg, Fabio Ramos

Robotic cutting of soft materials is critical for applications such as food processing, household automation, and surgical manipulation. As in other areas of robotics, simulators can facilitate controller verification, policy learning, and dataset generation. Moreover, differentiable simulators can enable gradient-based optimization, which is invaluable for calibrating simulation parameters and optimizing controllers. In this work, we present DiSECt: the first differentiable simulator for cutting soft materials. The simulator augments the finite element method (FEM) with a continuous contact model based on signed distance fields (SDF), as well as a continuous damage model that inserts springs on opposite sides of the cutting plane and allows them to weaken until zero stiffness, enabling crack formation. Through various experiments, we evaluate the performance of the simulator. We first show that the simulator can be calibrated to match resultant forces and deformation fields from a state-of-the-art commercial solver and real-world cutting datasets, with generality across cutting velocities and object instances. We then show that Bayesian inference can be performed efficiently by leveraging the differentiability of the simulator, estimating posteriors over hundreds of parameters in a fraction of the time of derivative-free methods. Finally, we illustrate that control parameters in the simulation can be optimized to minimize cutting forces via lateral slicing motions. We publish videos and additional results on our project website at
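The damage model described above can be caricatured in a few lines: virtual springs bridge the cutting plane, and each spring's stiffness is driven continuously toward zero as the knife passes, letting a crack open. The function name, the linear weakening law, and all parameter values below are illustrative assumptions, not DiSECt's actual FEM formulation.

```python
import numpy as np

def weaken_springs(stiffness, spring_pos, knife_depth, rate=5.0, radius=0.1):
    """Reduce the stiffness of springs within `radius` of the knife edge,
    clamping at zero so a fully weakened spring no longer resists."""
    near = np.abs(spring_pos - knife_depth) < radius
    out = stiffness.copy()
    out[near] = np.maximum(0.0, out[near] - rate)
    return out

# springs spaced along the cut direction, all initially intact
pos = np.linspace(0.0, 1.0, 11)
k = np.full_like(pos, 10.0)

# advance the knife through the material in small continuous steps
for depth in np.linspace(0.0, 1.0, 50):
    k = weaken_springs(k, pos, depth)
```

Because the stiffness decays continuously rather than being deleted discretely, the resisting force (and hence the simulation) stays differentiable with respect to knife trajectory and material parameters, which is what enables the gradient-based calibration and control optimization the abstract describes.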


GROUNDED: The Localizing Ground Penetrating Radar Evaluation Dataset
Teddy Ort, Igor Gilitschenski, Daniela Rus

Mapping and localization using surface features is prone to failure due to environment changes such as inclement weather. Recently, Localizing Ground Penetrating Radar (LGPR) has been proposed as an alternative means of localizing using underground features that are stable over time and less affected by surface conditions. However, due to the lack of commercially available LGPR sensors, the wider research community has been largely unable to replicate this work or build new and innovative solutions. We present GROUNDED, an open dataset of LGPR scans collected in a variety of environments and weather conditions. By labeling this data with ground-truth localization from an RTK-GPS/inertial navigation system, and carefully calibrating and time-synchronizing the radar scans with ground-truth positions, camera imagery, and lidar data, we enable researchers to build novel localization solutions that are resilient to changing surface conditions. We include 108 individual runs totalling 450 km of driving with LGPR, GPS, odometry, camera, and lidar measurements. We also present two new evaluation benchmarks, 1) Localizing in Weather and 2) Multi-lane Mapping, to enable comparisons of future work supported by the dataset. The dataset can be accessed at

© 2020 Faculty of Applied Science & Engineering