Authors
- Elliott Hauser*
- Yao-Cheng Chan*
- Geethika Hemkumar*
- Daksh Dua*
- Parth Chonkar*
- Efren Mendoza Enriquez*
- Tiffany Kao*
- Shikhar Gupta*
- Huihai Wang*
- Justin Hart*
- Reuth Mirsky*
- Joydeep Biswas*
- Junfeng Jiao*
- Peter Stone
* External authors
Venue
- TAS '23
Date
- 2023
"What's That Robot Doing Here?": Factors Influencing Perceptions Of Incidental Encounters With Autonomous Quadruped Robots.
Abstract
Autonomous service robots in a public setting will generate hundreds of incidental human-robot encounters, yet researchers have only recently addressed this important topic in earnest. In this study, we hypothesized that visual indicators of human control, such as a leash on a robot, would impact humans' perceptions of robots in the context of human-robot encounters. A pilot study (n = 26) and a revised study (n = 22) including semi-structured interviews (n = 21) were conducted. The interview data suggested that the presence of another human during the encounter elicited positive reactions from the participants. Counter to these interview findings, the Godspeed-based survey data yielded largely statistically non-significant differences between the conditions. We interpret this as evidence that traditional HRI survey instruments focused on the perception of robot characteristics may not be suitable for incidental human-robot encounters research. We suggest that human-robot encounters can be meaningfully characterized by participants' ability or inability to answer implicit questions such as, "What is that robot doing here?". We conclude with recommendations for human-robot encounters research methods and call for research on the intelligibility and acceptability of perceived robot purpose during human-robot encounters.