Trust Affordances in Human Automation Teaming

The goal of this project is to explore the foundational computational and physical design constraints that facilitate robot trustworthiness. It is rooted in questions about the processes of trust building and trust calibration in high-risk human-robot teaming, and it will involve physical human-robot experimentation, such as collaboration on time-sensitive, safety-critical tasks like cooperative manipulation to safely move a heavy object. The project will enable robots to learn to adapt to and anticipate human motion and to alter their own behaviors so that they become safe, competent, and trustworthy teammates. Results of this project will substantially inform the science of trust between humans and autonomous systems, including providing new methods for mutual capability assessment and adaptation, informing future design guidelines for trusted automation affordances and improved transparency, and offering new insights for mixed-initiative team training.
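To make the idea of trust calibration concrete, the sketch below shows one minimal, assumption-laden way a robot might track a teammate's (or its own) estimated trust from observed task outcomes: a Beta-distribution belief whose mean rises with successful interactions and falls with failures. This is only an illustrative example; the class name, prior counts, and update rule are assumptions for exposition, not the project's actual model.

```python
from dataclasses import dataclass


@dataclass
class BetaTrustModel:
    """Illustrative trust estimate: the mean of a Beta(alpha, beta) belief
    over the probability that the next collaborative interaction succeeds.
    All parameter choices here are hypothetical."""
    alpha: float = 1.0  # prior pseudo-count of successful interactions
    beta: float = 1.0   # prior pseudo-count of failed interactions

    def update(self, success: bool) -> None:
        """Shift the belief after observing one collaborative outcome."""
        if success:
            self.alpha += 1.0
        else:
            self.beta += 1.0

    @property
    def trust(self) -> float:
        """Point estimate of trust in [0, 1] (mean of the Beta belief)."""
        return self.alpha / (self.alpha + self.beta)


if __name__ == "__main__":
    model = BetaTrustModel()
    # Hypothetical outcomes from a short cooperative-manipulation trial sequence.
    for outcome in [True, True, False, True]:
        model.update(outcome)
    print(f"Estimated trust after 4 trials: {model.trust:.2f}")
```

A model like this could serve as a baseline against which richer calibration signals, such as observed human hesitation or corrective forces during cooperative manipulation, are compared; the project itself may use very different mechanisms.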