A Novel Control Law for Multi-joint Human-Robot Interaction Tasks While Maintaining Postural Coordination
Authors
- Keya Ghonasgi*
- Reuth Mirsky*
- Adrian M Haith*
- Peter Stone
- Ashish D Deshpande*
* External authors
Venue
- IROS 2023
Date
- 2023
Abstract
Exoskeleton robots are capable of safe torque-controlled interactions with a wearer while moving their limbs through pre-defined trajectories. However, affecting and assisting the wearer's movements while effectively incorporating their inputs (effort and movements) during an interaction remains an open problem due to the complex and variable nature of human motion. In this paper, we present a control algorithm that leverages task-specific movement behaviors to control robot torques during unstructured interactions by implementing a force field that imposes a desired joint angle coordination behavior. This control law, built using principal component analysis (PCA), is implemented and tested with the Harmony exoskeleton. We show that the proposed control law is versatile enough to allow for the imposition of different coordination behaviors with varying levels of impedance stiffness. We also test the feasibility of our method for unstructured human-robot interaction. Specifically, we demonstrate that participants in a human-subject experiment are able to effectively perform reaching tasks while the exoskeleton imposes the desired joint coordination under different movement speeds and interaction modes. Survey results further suggest that the proposed control law may offer a reduction in cognitive or motor effort. This control law opens up the possibility of using the exoskeleton to train the participant to accomplish complex multi-joint motor tasks while maintaining postural coordination.
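To make the abstract's idea more concrete, the following is a minimal, hypothetical sketch of a PCA-based coordination force field on joint angles: a low-dimensional coordination subspace is fit to task-specific joint-angle data, and a spring-like torque pulls the current posture toward that subspace. The function names, gains, and data shapes are illustrative assumptions and do not reflect the paper's actual implementation on the Harmony exoskeleton.

```python
# Hypothetical sketch of a PCA-based joint-coordination force field.
# All names, gains, and shapes are illustrative assumptions.
import numpy as np

def fit_coordination_subspace(joint_angle_data, n_components=2):
    """Fit a low-dimensional coordination subspace from recorded joint angles.

    joint_angle_data: (n_samples, n_joints) array of task-specific
    joint-angle samples (e.g., recorded reaching movements).
    """
    mean = joint_angle_data.mean(axis=0)
    centered = joint_angle_data - mean
    # Principal directions via SVD; rows of vt are the principal axes.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:n_components]              # (n_components, n_joints)
    return mean, basis

def coordination_torque(q, mean, basis, stiffness=5.0, damping=0.5, dq=None):
    """Spring-like torque pushing the current posture q toward the
    coordination subspace (imposes the desired joint coordination)."""
    deviation = q - mean
    # Project onto the subspace and reconstruct the "coordinated" posture.
    q_coordinated = mean + basis.T @ (basis @ deviation)
    tau = -stiffness * (q - q_coordinated)
    if dq is not None:
        tau -= damping * dq                 # optional damping for stability
    return tau

# Example with synthetic data standing in for recorded trajectories (7 joints).
data = np.random.randn(500, 7)
mean, basis = fit_coordination_subspace(data, n_components=2)
tau = coordination_torque(np.random.randn(7), mean, basis, stiffness=10.0)
```

In this sketch, the stiffness parameter plays the role of the varying impedance levels mentioned in the abstract: larger values pull the wearer's posture more strongly toward the learned coordination pattern.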