🏆 Aaquib and Matthew's work on AR-based task guidance under uncertainty receives best paper nomination at AAMAS 2022!

We introduce characterizations of and generative algorithms for two complementary modalities of visual guidance: prescriptive guidance (visualizing recommended actions) and descriptive guidance (visualizing state space information to aid decision-making).


Descriptive and Prescriptive Visual Guidance to Improve Shared Situational Awareness in Human-Robot Teaming

Aaquib Tabrez, Matthew Luebbers, and Bradley Hayes

In collaborative tasks involving human and robotic teammates, live communication between agents has the potential to substantially improve task efficiency and fluency. Effective communication provides essential situational awareness for adapting successfully to uncertain situations and encourages informed decision-making. In contrast, poor communication can lead to incongruous mental models, resulting in mistrust and failures. In this work, we first introduce characterizations of and generative algorithms for two complementary modalities of visual guidance: prescriptive guidance (visualizing recommended actions) and descriptive guidance (visualizing state space information to aid in decision-making). Robots can communicate this guidance to human teammates via augmented reality (AR) interfaces, facilitating synchronization of notions of environmental uncertainty and offering more collaborative and interpretable recommendations. We also introduce a min-entropy multi-agent collaborative planning algorithm for uncertain environments, informing the generation of these proactive visual recommendations for more informed human decision-making. We illustrate the effectiveness of our algorithm and compare these different modalities of AR-based guidance in a human subjects study involving a collaborative, partially observable search task. Finally, we synthesize our findings into actionable insights informing the use of prescriptive and descriptive visual guidance.
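To give a flavor of the min-entropy idea behind the planner, here is a minimal, single-agent sketch for a partially observable search task: an agent holds a belief over which cell contains the target and recommends the search action that minimizes the expected entropy of the posterior belief. This is an illustrative simplification under assumed conditions (a perfect sensor and a discrete belief over named cells), not the paper's multi-agent algorithm; all names here (`belief`, `min_entropy_action`, the cell labels) are hypothetical.

```python
import math

def entropy(belief):
    """Shannon entropy (in bits) of a discrete belief distribution."""
    return -sum(p * math.log2(p) for p in belief.values() if p > 0)

def posterior(belief, cell, found):
    """Bayesian belief update after searching `cell`, assuming a
    perfect sensor (illustrative assumption, not from the paper)."""
    if found:
        return {cell: 1.0}
    rest = {s: p for s, p in belief.items() if s != cell}
    z = sum(rest.values())
    return {s: p / z for s, p in rest.items()}

def expected_entropy(belief, cell):
    """Expected posterior entropy if the agent searches `cell` next.
    With prob. p the target is found (entropy 0); otherwise the
    renormalized belief over the remaining cells determines entropy."""
    p_found = belief.get(cell, 0.0)
    if p_found >= 1.0:
        return 0.0
    return (1 - p_found) * entropy(posterior(belief, cell, found=False))

def min_entropy_action(belief, candidate_cells):
    """Greedy min-entropy recommendation: the search cell whose
    observation most reduces expected uncertainty."""
    return min(candidate_cells, key=lambda c: expected_entropy(belief, c))

# Example: with this belief, searching the most likely cell best
# collapses uncertainty, so it would be the recommended action.
belief = {"A": 0.5, "B": 0.3, "C": 0.2}
best = min_entropy_action(belief, list(belief))
```

In the paper's setting, a recommendation like `best` could then be rendered prescriptively (highlighting the suggested cell in AR) or descriptively (visualizing the belief distribution itself so the human can decide).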