Reliable Autonomy at the Intersection of Constrained Motion Planning, Learning from Demonstration and Augmented Reality
Historically, robot automation has targeted industries that require consistency, precision, and long-term repeatability in their processes. Tasks that are dynamic or require operation in close proximity to human users reveal traditional robot controllers to be inflexible, costly, and unsafe. In response, robot Learning from Demonstration (LfD) methods enable users to teach robots through example actions, forgoing the need for programming expertise and providing a mechanism for flexibility. However, one limitation of traditional LfD methods is that they often rely on limited or context-independent information modalities, such as robot configurations, which inhibits the capture of pertinent skill information. In this talk, I will focus on methods that enable human users to communicate additional information, in the form of constraints, to a robot learning system, resulting in more robust, generalizable, and predictable skill execution. I will discuss how constraints based on abstract, high-level concepts can be integrated into existing LfD methods, how unique interfaces can further enhance the communication of such constraints, and how grounding these constraints requires novel constrained motion planning techniques.