📑 Matt and Aaquib's work on recency bias and perceptions of trust in human-robot interaction accepted at ICRA 2024!

Recency bias predictably impacts users' perceptions of robot task performance.


Recency Bias in Task Performance History Affects Perceptions of Robot Competence and Trustworthiness

Matthew Luebbers*, Aaquib Tabrez*, Kanaka Samagna Talanki, and Bradley Hayes

Human memory of a robot’s competence, and resulting subjective perceptions of that robot, are influenced by numerous cognitive biases. One class of cognitive bias deals with the ordering of items or interactions: information presented last among a grouping is most salient in memory formation (recency bias), followed by information presented first (primacy bias), followed by information in the middle, collectively known as the serial-position effect. For example, if a human’s last observation of a robot involves a task failure, this will disproportionately and negatively alter their perception of the robot’s competence, as well as their trust in the robot moving forward. It is of value to the research community to characterize the effect of these biases and those like them within human-robot interactions to inform strategies for risk-aware planning that cultivate appropriate levels of human trust. We conducted a human-subjects study (n=53) testing the influence of the serial-position effect on recalled competence. Participants viewed videos of a robot performing the same tasks at the same level of competence, with task order differing by experimental condition (rising competence, falling competence, or failures at the midpoint), and rated the robot’s competence after each video as well as at the very end of the experiment. We found that while the average between-video rating of robot competence remained stable across conditions, the recalled, post-experiment ratings of competence and trust were significantly lower in the falling-competence condition than in either of the other two conditions, suggesting a notable recency bias. We conclude with implications for human-subjects experiment design (i.e., how subjective measures are influenced by ordering effects) and provide design recommendations to minimize those effects. We further discuss practical applications of these results in creating risk-aware robotic planners capable of trust calibration.

The full paper can be accessed here and from our Publications tab.