Revealing and Mitigating Harmful Assumptions and Behaviors in Human-Autonomy Teaming
Abstract: Human-autonomy teaming in complex environments continues to evolve with technological innovations like mixed reality and rapidly improving large language models. With this evolution comes a need for increased safety measures and better ways for humans to learn and understand these systems. The work presented in this dissertation aims to address questions about safety, appropriate trust, and appropriate use of autonomy by and for humans. I begin with an overview of how mixed reality is used for human-robot collaboration. I then explore how we might use augmented reality to promote safety and compliance in a shared space environment with humans and robots. This leads to the question of how we can actively warn humans about failures of autonomous chatbots. Finally, I investigate iteratively adding latent human knowledge to an autonomous robot's trajectory optimization as a way of improving both learning and mission outcomes. Ultimately, I show that humans have a propensity to dangerously overtrust robots and other forms of autonomy; however, we can mitigate this bias with certain design considerations, including iteration and transparency.
Bio: Christine T. Chang is a PhD candidate in the Collaborative AI and Robotics Laboratory. Her research reveals insights into the nature of dangerous assumptions and behaviors in human-autonomy teaming and develops algorithmic techniques to mitigate the consequences of this misalignment. Prior to starting her PhD, Christine worked as a mechanical and aerospace engineer and as a K-20 educator. She obtained her BS in Mechanical Engineering from Cornell University, an MS in STEM Education from Boise State University, and an MS in Computer Science from CU Boulder. In Fall 2024, she is starting as an IEEE Congressional Fellow, supporting legislators on Capitol Hill. When she's not in the lab, you can find her running, mountain biking, skiing, and traveling.