Authors
- Keya Ghonasgi*
- Reuth Mirsky*
- Adrian M Haith*
- Peter Stone
- Ashish D Deshpande*
* External authors
Venue
- IROS
Date
- 2022
Quantifying Changes in Kinematic Behavior of a Human-Exoskeleton Interactive System
Abstract
While human-robot interaction studies are becoming more common, quantification of the effects of repeated interaction with an exoskeleton remains unexplored. We draw upon existing literature in human skill assessment and present extrinsic and intrinsic performance metrics that quantify how the human-exoskeleton system's behavior changes over time. Specifically, in this paper, we present a new performance metric that provides insight into the system's kinematics associated with 'successful' movements, resulting in a richer characterization of changes in the system's behavior. A human subject study is carried out wherein participants learn to play a challenging and dynamic reaching game over multiple attempts while donning an upper-body exoskeleton. The results demonstrate that repeated practice leads to learning over time, as identified through the improvement of extrinsic performance. Changes in the newly developed kinematics-based measure further illuminate how the participants' intrinsic behavior is altered over the training period. Thus, we are able to quantify the changes in the human-exoskeleton system's behavior observed in relation to learning.