Robotics Institute Seminar Series
The Robotics Institute Seminar Series is an annual invited speaker series that features leading roboticists from across Canada and around the world. Join us to engage with speakers, faculty and students to learn more about the latest research at the intersection of robotics and artificial intelligence.
Seminars typically take place from 3-4 p.m. (EST) in-person in the Myhal Centre for Engineering Innovation & Entrepreneurship (Room 580) and are also live streamed to the Robotics Institute YouTube channel.
2024-25 SEMINAR SERIES
Additional speakers to be announced soon
FALL 2024
SEPTEMBER 13 | 3-4 PM (EST)
Peter Stone, University of Texas at Austin, Sony AI
Human-in-the-Loop Learning for Robot Navigation and Task Learning from Implicit Human Feedback
SEPTEMBER 20 | 3-4 PM (EST)
OCTOBER 11 | 3-4 PM (EST)
Hadas Kress-Gazit, Cornell University
Formal Methods for Robotics in the Age of Big Data
Room MY580 or watch on YouTube
While end-to-end, fully autonomous learning is interesting to explore, for real-world applications, including robotics, the paradigm of human-in-the-loop learning has emerged as a practical way of guiding and speeding up the learning process. This talk will introduce some recent human-in-the-loop learning algorithms that enable robust navigation in challenging settings, such as in densely cluttered environments and over varying terrains. While most of these algorithms take explicit input from human trainers, the talk will close with a new paradigm for reinforcement learning from implicit human feedback, specifically observed facial expressions.
I am the founder and director of the Learning Agents Research Group (LARG) within the Artificial Intelligence Laboratory in the Department of Computer Science at The University of Texas at Austin, as well as associate department chair and Director of Texas Robotics.
I was a co-founder of Cogitai, Inc. and am now Chief Scientist of Sony AI.
My main research interest in AI is understanding how we can best create complete intelligent agents. I consider adaptation, interaction, and embodiment to be essential capabilities of such agents. Thus, my research focuses mainly on machine learning, multiagent systems, and robotics. To me, the most exciting research topics are those inspired by challenging real-world problems. I believe that complete successful research includes both precise, novel algorithms and fully implemented and rigorously evaluated applications. My application domains have included robot soccer, autonomous bidding agents, autonomous vehicles, and human-interactive agents.
Robot design is an inherently difficult process that requires balancing multiple different aspects: kinematics and geometry, materials and compliance, actuation, fabrication, control complexity, power, and more. Computational design systems aim to simplify this process by helping designers check whether their designs are feasible and interdependencies are satisfied. But what can we say about whether a design that accomplishes a task even exists? Or about what the simplest design that does the job is?
In this talk, I will discuss recent work from my group in which we have discovered that, in some cases, design problems can be mapped to problems in robot planning, and that results derived in the planning space allow us to make formal statements about design feasibility. These ideas apply to systems as varied as traditional robot arms, dynamical quadrupeds, compliant manipulators, and modular truss structures. I will share examples from systems developed in my group and look ahead to the implications of these results for future robot co-design.
Cynthia Sung is an Associate Professor in the Department of Mechanical Engineering and Applied Mechanics (MEAM) and a member of the General Robotics, Automation, Sensing & Perception (GRASP) lab at the University of Pennsylvania. She received a Ph.D. in Electrical Engineering and Computer Science from MIT in 2016 and a B.S. in Mechanical Engineering from Rice University in 2011. Her research interests are computational methods for design automation of robotic systems, with a particular focus on origami-inspired and compliant robots. She is the recipient of a 2024 ARO Early Career Award, 2023 ONR Young Investigator award, and a 2019 NSF CAREER award.
Website: sung.seas.upenn.edu
Formal methods - mathematical techniques for describing systems, capturing requirements, and providing guarantees - have been used to synthesize robot control from high-level specifications, and to verify robot behavior. Given the recent advances in robot learning and data-driven models, what role can, and should, formal methods play in advancing robotics? In this talk, Kress-Gazit will give an overview of the promise and challenges of formal methods for robotics and describe the synergies she sees with data-driven approaches.
Hadas Kress-Gazit is the Geoffrey S.M. Hedrick Sr. Professor at the Sibley School of Mechanical and Aerospace Engineering, and the Associate Dean of Engineering for Diversity and Academic Affairs at Cornell University. She received her Ph.D. in Electrical and Systems Engineering from the University of Pennsylvania in 2008 and has been at Cornell since 2009. Her research focuses on formal methods for robotics and automation and more specifically on synthesis for robotics – automatically creating verifiable robot controllers for complex high-level tasks. Her group explores different types of robotic systems including modular robots, soft robots and swarms and synthesizes ideas from different communities such as robotics, formal methods, control, and hybrid systems. She is an IEEE fellow and has received multiple awards for her research, teaching and advocacy for groups traditionally underrepresented in STEM. She lives in Ithaca with her partner and two kids.
OCTOBER 25 | 3-4 PM (EST)
WINTER 2025
JANUARY 31 | 3-4 PM (EST)
MARCH 17 | 10 AM - 2 PM (EST)
Aryan Rezaei Rad, University of Toronto
Robotic Fabrication and Computational Methods in Structural Engineering for Resilient Mass Timber Construction
Room details to come
PAST SEMINARS
Enhancing human mobility with agile robotic prostheses and orthoses
Even with the help of modern prosthetic and orthotic devices, individuals with lower-limb amputation, age-related motor deficits, or orthopedic disorders often struggle to navigate the home and community. Emerging powered prosthetic and orthotic devices could actively assist individuals to enable greater mobility, but these devices are typically designed to produce a small set of pre-defined motions. Although the field is beginning to embrace controllers that unify phases of the gait cycle, these devices still switch between distinct controllers for different tasks, e.g., uphill vs. downhill. This discrete control paradigm cannot continuously synchronize the robot’s motion to the variable activities of the human user. This talk will first present a new paradigm for controlling powered prosthetic legs over continuous variations of walking and stairs (i.e., different speeds and inclines), as well as continuous transitions between sitting and standing. These adaptable mid-level controllers facilitate a small activity space for intent classification, enabling amputee users to control activity transitions through intuitive, heuristic rules with over 99% accuracy. While these methods reproduce missing joint function, a different control philosophy is needed for exoskeletons that assist existing joint function. The last part of this talk will introduce an energetic control paradigm for backdrivable exoskeletons to reduce muscular effort by providing a fraction of the human torque, without requiring explicit knowledge of the activity. This task-agnostic control method enabled a bilateral knee exoskeleton to mitigate the effects of quadriceps fatigue in able-bodied individuals during repetitive lifting-lowering and carrying over 5 terrains, thus reducing their risk for injuries due to fatigue-induced compensations.
The talk will conclude with preliminary results from studies using hip and knee exoskeletons to enhance the mobility of elderly individuals in real-world scenarios.
Robert D. Gregg IV received the B.S. degree in electrical engineering and computer sciences from the University of California, Berkeley in 2006 and the M.S. and Ph.D. degrees in electrical and computer engineering from the University of Illinois at Urbana-Champaign in 2007 and 2010, respectively. He joined the University of Michigan as an Associate Professor in the Department of Electrical Engineering and Computer Science and the Robotics Institute in Fall 2019, and he became the Associate Director of Robotics in Fall 2020. He joined the Department of Robotics upon its establishment in July 2022. Prior to joining U-M, he was an Assistant Professor in the Departments of Bioengineering and Mechanical Engineering at the University of Texas at Dallas. Dr. Gregg directs the Locomotor Control Systems Laboratory, which conducts research on the control mechanisms of bipedal locomotion with applications to wearable and autonomous robots. He is a recipient of the Eugene McDermott Endowed Professorship, NSF CAREER Award, NIH Director’s New Innovator Award, and Burroughs Wellcome Fund Career Award at the Scientific Interface. Dr. Gregg is a Senior Member of the IEEE. https://locolab.robotics.umich.edu/
Probabilistic Robotics 2.0: Leveraging Differentiability and Parallelism for Diversity in Planning and Perception under Uncertainty
Much has been said about the need for diversity in robotics. From diverse datasets for training large vision-action models to diverse planners that can infer multi-modal trajectories, the word diversity has been a common theme in the last few years of robotics research. But how do we define or even measure diversity in robotics? In this talk, I will provide a probabilistic interpretation for diversity and show that modern tools designed for deep learning, such as differentiable programming languages and parallel computation on GPUs, can be conveniently utilized for large-scale probabilistic inference that naturally captures the notion of diversity. Specifically, I will describe a powerful nonparametric inference method that uses both differentiability and parallelism to provide nonparametric posterior approximations for problems such as model predictive control, motion planning, state estimation, simulator parameter estimation and more. Finally, I will define diversity in trajectory planning in terms of a new mathematical tool, signature transforms, and show how it can lead to novel planning methods in the future.
Fabio Ramos is a Professor in robotics and machine learning at the School of Computer Science at the University of Sydney and a Principal Research Scientist at NVIDIA. He received the BSc and MSc degrees in Mechatronics Engineering from the University of Sao Paulo, Brazil, and the PhD degree from the University of Sydney, Australia. His research focuses on statistical machine learning techniques for large-scale Bayesian inference and decision making with applications in robotics, mining, environmental monitoring and healthcare. Between 2008 and 2011 he led the research team that designed the first autonomous open-pit iron mine in the world. He has over 150 peer-reviewed publications and received Best Paper Awards and Student Best Paper Awards at several conferences including the International Conference on Intelligent Robots and Systems (IROS), the Australasian Conference on Robotics and Automation (ACRA), the European Conference on Machine Learning (ECML), and Robotics: Science and Systems (RSS).
Towards Unified Robotics Manipulation via Object-centric Policy
Enabling robots to manipulate everyday objects is a key focus in embodied intelligence research. This area presents challenges due to the diversity of tasks and the varying shapes and structures of objects. An ideal robotic manipulation policy should provide a unified representation that is adaptable to different objects and tasks. In this presentation, we introduce the concept of object-centric manipulation policy through affordance learning, which offers a unified way to represent policies that are suitable for a wide range of objects and tasks. By leveraging affordance, we decouple the relationship between robots and objects, enabling different robots to perform the same tasks using the same action trajectories.
Hao Dong is a BOYA Assistant Professor at the School of Computer Science, Peking University, where he leads the PKU-Agibot Lab. His current research focuses on embodied AI, robotics and computer vision. His goal is to find the scaling law for creating cost-effective and autonomous robot systems, with applications not limited to industrial and home-assistance scenarios, aiming to bring the benefits of AI to a global scale.
Additionally, Hao serves as an Area Chair or Senior Program Committee member for the CVPR, NeurIPS and AAAI conferences, and as an Associate Editor of ICRA and Machine Intelligence Research. He received the MIR Outstanding Associate Editor Award. He has also long been involved in open-source AI systems, leading several open-source projects such as TensorLayer and OpenMLsys, and has won the Best Open Source Software Award at ACM Multimedia as well as the OpenI Outstanding Project Award twice.
Before joining PKU, Hao obtained his Ph.D. degree from Imperial College London under the supervision of Yike Guo. Prior to his Ph.D., he received an MSc degree with distinction from Imperial and a first-class BEng degree from the University of Central Lancashire. Between 2012 and 2015, he also founded and ran a startup focused on AI-driven hardware.
Composable Optimization for Simulation, Motion Planning, and Control
Contact interactions are pervasive in real-world robotics tasks like manipulation and walking. However, the non-smooth dynamics associated with impacts and friction remain challenging to model, and motion planning and control algorithms that can fluently and efficiently reason about contact remain elusive. In this talk, I will share recent work from my research group that takes an “optimization-first” approach to these challenges: collision detection, physics, motion planning, state estimation, and control are all posed as constrained optimization problems. We then build a set of algorithmic and numerical tools that allow us to flexibly compose these optimization sub-problems to solve complex robotics tasks involving discontinuous, unplanned, and uncertain contact mechanics.
Zac Manchester is an Assistant Professor of Robotics at Carnegie Mellon University. He holds a Ph.D. in aerospace engineering and a B.S. in applied physics from Cornell University. Zac was a postdoc in the Agile Robotics Lab at Harvard University and previously worked at Stanford, NASA Ames Research Center and Analytical Graphics, Inc. He received a NASA Early Career Faculty Award in 2018, a Google Faculty Research Award in 2019, and has led four satellite missions. His research interests include motion planning, control, and numerical optimization, particularly with application to robotic locomotion and spacecraft guidance, navigation, and control.
From Data Structure, Physics, and Human Knowledge: A Manifold of Robotic Geometries
To be deployed in our everyday life, robots must display outstanding learning and adaptation capabilities allowing them to act, react, and continuously learn in unstructured dynamic environments. In addition, robots should display such capabilities in real time, which entails the ability to continuously learn from small numbers of demonstrations and/or interactions. In this context, the quality and efficiency of robot learning approaches may be improved via the introduction of inductive bias. In this talk, I will view inductive bias through the lens of geometry, which is ubiquitous in robotics. Specifically, I will discuss via three examples how geometry-based inductive bias can be introduced into robot learning from data structures, from physics, and from human knowledge. First, I will show that the performance of various algorithms may be improved by considering the intrinsic geometric characteristics of data. Second, I will discuss how the dynamic properties of humans and robots are straightforwardly accounted for by considering physics-based geometric configuration spaces. Finally, I will show that imposing an additional geometric structure to latent spaces allows us to learn low-dimensional representations of robotics taxonomies in continuous domains.
Noémie Jaquier is a postdoctoral researcher working with Prof. Tamim Asfour in the High Performance Humanoid Technologies Lab (H²T) at the Karlsruhe Institute of Technology (KIT) and currently a visiting postdoctoral scholar working with Prof. Oussama Khatib at the Stanford Robotics Lab. She obtained a PhD from the Ecole Polytechnique Fédérale de Lausanne (EPFL), Switzerland in 2020. In 2019, she did a six-month PhD sabbatical at the Bosch Center for Artificial Intelligence (BCAI), Germany. She obtained a Master in Robotics and Autonomous Systems and a Minor in Computational Neurosciences from EPFL in 2016 and a Bachelor in Microengineering from EPFL in 2014. Noémie's research brings a novel Riemannian perspective to robot learning, optimization, and control by leveraging Riemannian geometry as an inductive bias and as a theory to provide sound theoretical guarantees. She investigates data-efficient methods that build on geometric spaces and exploit the geometric information naturally arising in robotic data. Her work focuses on skills learning via human demonstrations and adaptation techniques with geometry as a cornerstone. It spans various applications in the field of robot manipulation.
The Multi-objective shortest-path problem: State-of-the-art algorithms with application to robot motion planning
Many robotic motion-planning algorithms discretize the continuous configuration space into a roadmap, which is a graph where vertices correspond to configurations and edges correspond to local transitions. Graph-search algorithms are then used to compute collision-free paths on this roadmap. For instance, Dijkstra's algorithm and A* are used to compute minimum-cost paths between given vertices, and LPA* is used when the graph undergoes dynamic changes (e.g., when edges are found to be in-collision, or the roadmap is incrementally densified). In this talk I will discuss another type of graph-search algorithm that solves the more general multi-objective shortest-path problem (MOSP). In MOSP, each edge is endowed with several costs corresponding to different objectives (e.g., distance and safety) and we wish to compute the Pareto frontier: the set of paths for which it is impossible to improve one objective without worsening at least one other objective. I will describe how MOSP can be used in many robot motion-planning problems, as well as the state-of-the-art algorithmic approaches for MOSP problems.
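The Pareto-frontier idea described above can be made concrete with a small sketch: given candidate paths scored on several objectives (lower is better), keep only the non-dominated ones. The path names and cost values below are hypothetical, invented purely for illustration.

```python
def pareto_front(paths):
    """Keep only the paths whose cost vectors are non-dominated.

    `paths` maps a path name to a tuple of objective costs
    (e.g. (distance, risk)); lower is better in every objective.
    """
    def dominates(a, b):
        # a dominates b if a is no worse in every objective and
        # strictly better in at least one.
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))

    return {name: cost for name, cost in paths.items()
            if not any(dominates(other, cost) for other in paths.values())}

# Hypothetical candidate paths on a roadmap, costed by (distance, risk).
candidates = {
    "short_but_risky": (4.0, 9.0),
    "balanced":        (6.0, 4.0),
    "long_and_safe":   (9.0, 2.0),
    "dominated":       (7.0, 5.0),  # "balanced" beats it in both objectives
}
front = pareto_front(candidates)  # "dominated" is filtered out
```

The three surviving paths each trade one objective off against the other, which is exactly why the frontier, rather than a single minimum-cost path, is the natural solution concept for MOSP.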
Oren Salzman is an Assistant Professor in the Computer Science Department at the Technion - Israel Institute of Technology. His research focuses on revisiting classical computer science algorithms, tools and paradigms to address the computational challenges that arise when planning motions for robots. Combining techniques from diverse domains such as computational geometry, graph theory and machine learning, he strives to provide efficient algorithms with rigorous analysis for robot systems with many degrees of freedom moving in tight quarters. Oren completed his PhD in the School of Computer Science at Tel Aviv University under the supervision of Prof. Dan Halperin. He then continued his studies as a postdoctoral researcher at Carnegie Mellon University working with Siddhartha Srinivasa and Maxim Likhachev, and as a research scientist at the National Robotics Engineering Center (NREC). Oren has published over sixty peer-reviewed conference and journal papers. He received the best paper and best student paper awards at ICAPS 18 and ICAPS 19, respectively, as well as a nomination for the best-paper award at RSS 21.
From Automation to Collaboration: Repositioning Robotics in Construction
Stefana Parascho is a researcher, architect, and educator whose work lies at the intersection of architecture, digital fabrication and computational design. She is currently an Assistant Professor at EPFL where she founded the Lab for Creative Computation (CRCL).
Through her research, she has explored multi-agent fabrication methods, and their relationship to architecture. Her current research focuses on human-robot collaborative processes and the relationship between robotic construction and the built environment. Her goal is to strengthen the interdisciplinary nature of the field by increasing accessibility of digital tools and connecting technical research with societal aspects.
Before joining EPFL, Stefana was an Assistant Professor at Princeton University, where she led the CREATE Lab Princeton. She completed her doctorate in 2019 at ETH Zurich, Gramazio Kohler Research. Previously, she received her Diploma in Architectural Engineering from the University of Stuttgart and worked with DesignToProduction Stuttgart and Knippers Helbig Advanced Engineering.
Optimality and Guarantees in Robotics Exploration
Guaranteeing effective exploration is a vital component in the success of robotic applications in ocean and space exploration, environmental monitoring, and search and rescue tasks. This talk presents a novel formulation of exploration that permits optimality criteria and performance guarantees for robotic exploration tasks. We define the problem of exploration as a coverage problem on continuous (infinite-dimensional) spaces based on ergodic theory and derive control methods that satisfy optimality and guarantees such as asymptotic coverage, set-invariance, time-optimality, and reachability in exploration tasks. Finally, we demonstrate successful execution of the approach on a range of robotic systems.
Ian Abraham is an Assistant Professor in Mechanical Engineering with a courtesy appointment in the Computer Science Department at Yale University. His research group is focused on developing real-time optimal control methods for data-efficient robotic learning and exploration. Before joining Yale, he was a postdoctoral researcher at the Robotics Institute at Carnegie Mellon University in the Biorobotics Lab. He received his Ph.D. and M.S. degrees in Mechanical Engineering from Northwestern University and his B.S. degree in Mechanical and Aerospace Engineering from Rutgers University. During his Ph.D. he also worked at the NVIDIA Seattle Robotics Lab on robust model-based control under large parameter uncertainty. His research interest lies at the intersection of robotics, optimal control, and machine learning, with a focus on developing real-time embedded software for exploration and learning. He is the recipient of the 2023 Best Paper Award at the Robotics: Science and Systems conference, the 2019 King-Sun Fu IEEE Transactions on Robotics Best Paper award, the Northwestern Belytschko Outstanding Research award for his dissertation, and the 2023 NSF CAREER award.
Next-Generation Robot Perception: Hierarchical Representations, Certifiable Algorithms, and Self-Supervised Learning
Reinforcement Learning for Real World Problems
Learning methods such as deep reinforcement learning have shown success in solving simulated planning and control problems but struggle to produce diverse, intelligent behaviour on systems that interact with the real world (robots). This talk aims to discuss these limitations, provide methods to overcome them, and enable agents capable of training autonomously, becoming learning and adapting systems that require little supervision while performing diverse tasks. The talk will cover new Sim2Real methods that are more robust, offline RL methods for longer planning tasks, and how to enable training larger-scale learning systems for planning agents.
Glen Berseth is an assistant professor at the Université de Montréal, a core academic member of Mila, a Canada CIFAR AI Chair, and co-director of the Robotics and Embodied AI Lab (REAL). He was a postdoctoral researcher with Berkeley Artificial Intelligence Research (BAIR), working in the Robotic AI & Learning (RAIL) lab with Sergey Levine. Glen completed his NSERC-supported Ph.D. in Computer Science at the University of British Columbia in 2019, where he worked with Michiel van de Panne. His previous and current research has focused on solving sequential decision-making problems (planning) for real-world autonomous learning systems (robots). His research has covered the areas of human-robot collaboration, reinforcement, continual-, meta-, multi-agent, and hierarchical learning. Dr. Berseth has published across the top venues in robotics, machine learning, and computer animation. He also teaches a course on robot learning at Université de Montréal and Mila, covering the most recent research on machine learning techniques for creating generalist robots.
Smart 3D Microtechnologies for Biology and Human Health
In the future, it is envisioned that living systems will be integrated with sensors and actuators to enable two-way information transfer. Also, future medicine will incorporate miniaturized machines with mechanisms for shape change, self-latching and smart robotic intervention. I will describe our research on next-generation 3D microstructured materials and mechanized devices for interfacing cells, organoids, and humans. These microtechnologies feature three dimensionality, shape-change, biomolecular programmability, mechanical compliance and integrated sensing and actuation of widespread relevance to biomedical engineering, human health, diagnostics, drug delivery and surgery. The technologies leverage ultrathin and biocompatible materials, self-folding and transfer printing processing paradigms, and heterogeneous and hybrid materials integration. They utilize mechanisms such as the triggered release of residual stress or differential swelling to power and enable functions such as gripping, self-latching, stimuli-responsive locomotion and 2D to 3D reconfigurability. Applications include theragrippers, microinjectors, organoid intelligence, biointerfacing tattoos, programmable soft robots, and single-cell manipulation tools.
David Gracias is a Professor at the Johns Hopkins University with a primary appointment in the Whiting School of Engineering and secondary appointments in the Krieger School of Arts and Sciences and the Johns Hopkins School of Medicine. Prof. Gracias received his PhD from UC Berkeley and did post-doctoral research at Harvard University prior to starting his independent laboratory. He has made pioneering contributions to micro and nanotechnology as described in over 200 technical publications, including several in high impact journals such as Science. He is also a prolific inventor and holds 36 issued US patents, with notable inventions on microchip integration, self-folding polyhedra, integrated biosensors, programmable soft-robots and untethered microgrippers. He is an elected Fellow of diverse international scientific and engineering societies, including AAAS, IEEE, APS, RSC, and AIMBE.
Geometric Methods for Robot Learning: From Models to Representations
As robots become more high-dimensional and complex, and are asked to perform more challenging manipulation tasks in unstructured dynamic environments, the limits of traditional model-based robot planning and control are becoming more apparent. Efforts to augment traditional methods with, for example, models for friction, deformation, contact, external disturbances and noise, have had only limited success. Instead, there is growing optimism that by collecting large amounts of data -- from a combination of trials and simulations -- and applying machine learning methods -- deep learning, reinforcement learning -- traditional models can be bypassed entirely and replaced by a neural network. As of yet, such optimism is premature; models and representations remain essential to effectively applying learning methods to real robotics problems. In this talk we highlight three case studies in which methods from differential geometry and Lie groups play a central role in robot model and representation learning. First, exploiting a connection between the properties of a rigid body (link masses are positive, and inertia tensors are positive-definite) and the geometry of a certain Riemannian manifold, a set of robust algorithms are derived for estimating accurate models of humanoids and other complex high-dof robots, even when the measurements are noisy and incomplete. Second, methods for constructing accurate low-dimensional representations of a robot's task-specific configuration space (latent space) are developed using the coordinate-free methods of harmonic mapping theory. Finally, a general method for constructing neural network models that are equivariant with respect to arbitrary symmetry groups -- such neural networks are important in vision-based manipulation, for example -- is presented.
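The equivariance property mentioned in the abstract, f(g·x) = g·f(x) for every group element g, can be checked concretely. As a toy illustration (not the talk's method), here is a sketch using the permutation group acting on a list of features; mean-centering is one of the simplest permutation-equivariant maps:

```python
import itertools

def mean_center(xs):
    """A simple permutation-equivariant map: subtract the mean feature."""
    m = sum(xs) / len(xs)
    return [x - m for x in xs]

def act(perm, xs):
    """Group action of a permutation on a list of features."""
    return [xs[i] for i in perm]

# Equivariance: mapping then acting equals acting then mapping,
# checked for every element of the permutation group on 4 items.
xs = [1.0, 4.0, 2.5, -3.0]
equivariant = all(
    act(p, mean_center(xs)) == mean_center(act(p, xs))
    for p in itertools.permutations(range(len(xs)))
)
```

Equivariant neural networks generalize this idea: each layer is constrained so the check above holds by construction for the chosen symmetry group (e.g., rotations in vision-based manipulation), rather than being verified after the fact.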
Frank C. Park received the B.S. degree in EECS from MIT in 1985 and the Ph.D. degree in applied mathematics from Harvard in 1991. He joined the faculty of the University of California, Irvine in 1991, and since 1995 he has been a professor of mechanical engineering at Seoul National University, with a joint appointment in the SNU Graduate School of Data Science. He is a fellow of the IEEE and has held adjunct faculty positions with the NYU Courant Institute, the Interactive Computing Department at Georgia Tech, and the HKUST Robotics Institute in Hong Kong. His research interests span robot mechanics, planning and control, vision and image processing, mathematical data science, and related areas of applied mathematics. He is a former editor-in-chief of the IEEE Transactions on Robotics, developer of the edX course Robot Mechanics and Control I-II, and coauthor (with Kevin Lynch) of the textbook Modern Robotics: Mechanics, Planning, and Control (Cambridge University Press, 2017). He served as president of the IEEE Robotics and Automation Society for 2022–2023, and is founder and CEO of Saige Research, an industrial AI company specializing in inspection and quality control.
Miniature Bio-Inspired Robots: Rigid to Compliant
As robotics researchers, we try to build highly mobile, efficient, and robust robots similar to living organisms in terms of locomotion performance. Achieving high-performance locomotion becomes more of a challenge as the size scale of the robot decreases. We observe miniature robots’ biological counterparts, i.e., insects and small animals, having extraordinary locomotion capabilities such as running, jumping, and climbing robustly over various terrains. Despite the recent advances in the field of miniature robotics, the design and capabilities of miniature robots are still limited due to the unavailability of fabrication methods, the rigidity of our mechanical structures, and our poor grasp of the physics behind miniature robot locomotion. This talk addresses these challenges, focusing on the mechanical design and fabrication of Bilkent miniature robots, the effects of their structural compliance on robot locomotion, and the modeling efforts conducted to understand locomotion at the miniature scale.
Dr. Onur Özcan creates bio-inspired miniature robots through research at the interface of mechanical engineering and robotics. He received his B.S. (2007) in Mechatronics Engineering from Sabanci University and his M.S. (2010) and Ph.D. (2012) in Mechanical Engineering from Carnegie Mellon University in Pittsburgh, Pennsylvania, USA, where he worked on the control and automation of tip-directed nanoscale fabrication. As a postdoctoral fellow, he conducted research on the fabrication and control of miniature crawling robots at Harvard University’s School of Engineering and Applied Sciences and the Wyss Institute for Biologically Inspired Engineering from April 2012 to January 2015. Following his postdoctoral position, he joined the Bilkent University Mechanical Engineering Department as an Assistant Professor in January 2015. He leads the Bilkent Miniature Robotics Lab and is active in the miniature and soft robotics fields. He runs several projects on miniature and soft robots funded by TÜBİTAK (the national science foundation of Turkey). He is an associate editor for the soft robotics field in IEEE Robotics and Automation Letters. He received the Young Scientist Award from The Science Academy in Turkey in 2023.
Where to Trust and How to Adapt Learned Models for Motion Planning
The world outside our labs seldom conforms to the assumptions used to train our models. No matter how powerful our simulators or how big our datasets, our models will sometimes be wrong, because they will eventually encounter states or environments outside the distribution of the training data. This talk will present our recent work on determining where learned models can be trusted and how to adapt them to new scenarios. These methods, which focus on dynamics models and sampling distributions for trajectory optimization, are designed for motion planning across a wide range of robotic systems. They provide statistical guarantees on where learned dynamics models can be trusted, and strong empirical performance when adapting sampling distributions for trajectory optimization to environments radically different from those used in training.
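The abstract does not specify how the statistical trust guarantees are constructed, but the general flavor of such a guarantee can be illustrated with a split-conformal-style calibration check: hold out a set of prediction errors, pick a quantile threshold, and only trust the learned dynamics model where its estimated error falls under that threshold. This is a generic sketch of the idea under an exchangeability assumption, not the speaker's algorithm; all names and the toy data are illustrative.

```python
import math

def trust_threshold(calib_errors, alpha=0.1):
    """Split-conformal-style bound: return a threshold t such that a fresh
    prediction error exceeds t with probability at most alpha, assuming the
    calibration and test errors are exchangeable."""
    n = len(calib_errors)
    errs = sorted(calib_errors)
    # Conformal quantile index: ceil((n + 1) * (1 - alpha)), clipped to n.
    k = min(n, math.ceil((n + 1) * (1 - alpha)))
    return errs[k - 1]

def trusted(model_error, threshold):
    """Trust the learned model at a state only if its estimated one-step
    prediction error lies within the calibrated bound."""
    return model_error <= threshold

# Toy calibration set of held-out one-step dynamics prediction errors.
calib = [0.02, 0.05, 0.01, 0.04, 0.03, 0.06, 0.02, 0.05, 0.07, 0.03]
t = trust_threshold(calib, alpha=0.1)
```

A planner could then restrict trajectory optimization to regions where `trusted` holds, falling back to a conservative model elsewhere.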
Dmitry Berenson is an Associate Professor in Electrical Engineering and Computer Science and the Robotics Institute at the University of Michigan, where he has been since 2016. Before coming to the University of Michigan, he was an Assistant Professor at WPI (2012-2016). He received a BS in Electrical Engineering from Cornell University in 2005 and his Ph.D. from the Robotics Institute at Carnegie Mellon University in 2011, where he was supported by an Intel PhD Fellowship. He was also a post-doc at UC Berkeley (2011-2012). He has received the IEEE RAS Early Career Award and the NSF CAREER Award. His current research focuses on robotic manipulation, robot learning, and motion planning.
Theseus: A Library for Differentiable Nonlinear Optimization
Theseus is an efficient, application-agnostic open-source library (https://github.com/facebookresearch/t...) for differentiable nonlinear least squares optimization, built on PyTorch, that provides a common framework for end-to-end structured learning in robotics and vision. In this talk, I will cover its application-agnostic differentiable components, such as second-order optimizers, standard cost functions, and Lie groups, which together enable several applications. Then I’ll dive into features like sparse solvers, automatic vectorization, batching, GPU acceleration, and implicit differentiation that provide significant efficiency gains and scalability. Finally, I’ll summarize the community reception since its release, the research it has already enabled, and the potential for future research.
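Theseus exposes these components through a PyTorch API; as a dependency-free illustration of the kind of inner loop such a library differentiates through, the sketch below runs plain Gauss-Newton (a second-order least-squares optimizer) on a toy one-parameter curve fit. The function and problem are illustrative only and are not the Theseus API.

```python
import math

def gauss_newton_1d(xs, ys, a0, iters=20):
    """Minimal Gauss-Newton loop fitting y = exp(a*x) to data.

    Illustrates the nonlinear least-squares inner optimization that
    libraries like Theseus wrap and differentiate end to end; this toy
    version is plain Python, not the Theseus API.
    """
    a = a0
    for _ in range(iters):
        # Residuals r_i = f(a, x_i) - y_i and Jacobian entries dr_i/da.
        r = [math.exp(a * x) - y for x, y in zip(xs, ys)]
        J = [x * math.exp(a * x) for x in xs]
        JtJ = sum(j * j for j in J)
        Jtr = sum(j * ri for j, ri in zip(J, r))
        if JtJ == 0.0:
            break
        a -= Jtr / JtJ  # Gauss-Newton step: (J^T J)^{-1} J^T r
    return a

# Noise-free data generated with a = 0.5; the solver should recover it.
xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [math.exp(0.5 * x) for x in xs]
a_hat = gauss_newton_1d(xs, ys, a0=0.0)
```

In a differentiable-optimization setting, the optimum `a_hat` would itself be differentiated with respect to upstream learnable quantities (e.g., via implicit differentiation), which is what enables end-to-end structured learning.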
Mustafa Mukadam is a Research Scientist at Meta AI (FAIR). His work focuses on fundamental and applied research in robotics and machine learning, and on structured techniques at their intersection toward practical robot learning. Specifically, his research spans problems from perception to planning for navigation and manipulation. He received a Ph.D. from Georgia Tech, where he was part of the Robot Learning Lab and the Institute for Robotics and Intelligent Machines. His work has been covered by media outlets such as GeekWire, VentureBeat, and TechCrunch, and his work on motion planning received the 2018 IJRR Paper of the Year Award.
Soft Materials Mechanics for Health and Sustainability
Polymers and water are the major components that constitute most living species on Earth, ranging from animals, plants, and fungi to bacteria. Polymers are also pervasive and indispensable in almost every aspect of our daily life, ranging from food, clothing, housing, and healthcare to transportation, communication, and entertainment. Furthermore, over 6% of global electricity generated from coal is used to make plastics, and microplastics are already ubiquitous in the global biosphere. Intrigued by their ubiquity and impacts, the MIT Zhao Lab focuses on the study and development of soft materials and systems constituted mainly of polymers and water. In this talk, I will first discuss a general strategy for designing new soft materials that possess extreme physical, chemical, and biological properties via bio-inspired and rational design of unconventional polymer networks. Then I will illustrate the impacts of soft materials mechanics on health and sustainability with examples including soft robots that treat strokes under remote control and wearable devices that image deep organs over days. I will propose two challenges in fundamental science and technology:
- Can we image the full human body over days to months continuously?
- Can we edit the full human body with micro-robots minimally invasively?
I will conclude the talk with a vision for the future development and impacts of soft materials and systems, aided by and synergized with modern technologies such as artificial intelligence, synthetic biology, and precision medicine.
Xuanhe Zhao is a Professor of Mechanical Engineering at MIT. The mission of the Zhao Lab is to advance science and technology between humans and machines to address grand societal challenges in health and sustainability. A major focus of the Zhao Lab is the study and development of soft materials and systems. Dr. Zhao is a Clarivate Highly Cited Researcher (2018, 2021-present). He is a co-founder of SanaHeal, Inc., a startup company translating bioadhesive technology for clinical applications. Over ten patents from the Zhao Lab have been licensed by companies and have contributed to FDA-approved and widely used medical devices.
Robotics and Bioengineering for Minimally Invasive Surgery and Targeted Therapy
Starting from a recent analysis of the state of the art, the speaker will introduce the main challenges of surgical robotics, paying particular attention to the problem of generating therapeutic effects with minimally invasive solutions. She will present case studies drawn from her 25 years of experience in the field, highlighting contributions from both robotics and bioengineering along this research path.
Arianna Menciassi (IEEE Fellow) received the M.Sc. degree in physics from the University of Pisa, Pisa, Italy, in 1995, and the Ph.D. degree in bioengineering from Scuola Superiore Sant’Anna (SSSA), Pisa, Italy, in 1999. She is currently a Professor of Bioengineering and Biomedical Robotics at SSSA, where she is Team Leader of the “Surgical Robotics & Allied Technologies” area within The BioRobotics Institute. She has been Coordinator of the Ph.D. in BioRobotics since 2018, and in 2019 she was appointed Vice-Rector of SSSA. Her research interests include surgical robotics, microrobotics for biomedical applications, biomechatronic artificial organs, and smart and soft solutions for biomedical devices. She pays special attention to combining traditional robotics with targeted therapy and wireless solutions for therapy (e.g., ultrasound- and magnetic-based). Prof. Menciassi served on the Editorial Board of the IEEE/ASME TRANSACTIONS ON MECHATRONICS and was a Topic Editor of the International Journal of Advanced Robotic Systems. She was co-chair of the IEEE Technical Committee on Surgical Robotics and an Associate Editor of the IEEE TRANSACTIONS ON MEDICAL ROBOTICS AND BIONICS until the end of 2022. She is currently a co-Editor of the IEEE TRANSACTIONS ON ROBOTICS and of APL Bioengineering, and she also serves on the Editorial Board of the Soft Robotics journal.
Forceful systems design for "messy" field environments
For robots to perform helpful manual tasks, they must be able to physically interact with the real world. The ability of robots to grasp and manipulate often depends on the strength and reliability of contact conditions, e.g., friction. In this talk, I will introduce how my lab is developing tools for "messy" or adversarial contact conditions (granular and rocky media, fluids, human interaction) to support the design of more capable systems. Developing models of contact enables parametric studies that can powerfully inform the mechanical design of robots. Coupled with prototyping and experimental exploration, we generate new systems that better embody desired capabilities. In particular, we are creating grippers, skins, tactile sensors, and wearables for the hands, focusing on the point of contact. I will draw upon recent examples, including how we are (1) harnessing fluid flow in soft grippers to improve and monitor grasp state in unique ways and (2) modeling granular interaction forces to support new single- and multi-agent capabilities in loose terrains.
Dr. Hannah Stuart is the Don M. Cunningham Assistant Professor in the Department of Mechanical Engineering at the University of California, Berkeley. She received her BS in Mechanical Engineering at the George Washington University in 2011, and her MS and PhD in Mechanical Engineering at Stanford University in 2013 and 2018, respectively. Her research focuses on understanding the mechanics of physical interaction in order to better design systems for dexterous manipulation. A major focus is real-world field deployment of robots, including the design of bio-inspired systems for remote exploration in challenging environments such as the ocean or the surface of the Moon. Recent research awards include the NASA Early Career Faculty grant and the Johnson & Johnson Women in STEM2D grant. She is also currently Co-Chair of the IEEE Technical Committee on Mechanisms & Design and the Technical Committee on Robotic Hands, Grasping & Manipulation, and an Associate Editor for IEEE Robotics and Automation Letters.