Abstract
As the prevalence of collaborative robots increases, physical interactions between humans and robots are inevitable, presenting an opportunity for robots not only to maintain safe working parameters around humans but also to learn from these interactions. To develop adaptive robots, we first analyze human responses to different errors through a study in which users are asked to correct any errors the robot makes across various tasks. With this characterization of corrections, we can treat physical human-robot interactions as informative rather than ignoring them or letting the robot return to its originally planned behavior once the interaction ends. We incorporate physical corrections into existing learning from demonstration (LfD) frameworks, which allow robots to learn new skills by observing human demonstrations. We demonstrate that learning from physical interactions can improve task-specific performance metrics. The results reveal that including information about the behavior being corrected in the update improves task performance significantly compared to adding corrected trajectories alone. In a user study with an optimal control-based LfD framework, we also find that users need to provide less feedback after each interaction-based update to the robot's behavior. Utilizing corrections could allow advanced LfD techniques to be integrated into commercial applications for collaborative robots by enabling end-users to customize a robot's behavior through intuitive physical interactions rather than by modifying the behavior in software.
| Original language | English (US) |
|---|---|
| Journal | IEEE Transactions on Human-Machine Systems |
| State | Accepted/In press - 2025 |
All Science Journal Classification (ASJC) codes
- Human Factors and Ergonomics
- Control and Systems Engineering
- Signal Processing
- Human-Computer Interaction
- Computer Science Applications
- Computer Networks and Communications
- Artificial Intelligence