Learning human actions on-demand based on graph theory
Abstract
With the rise of modern robotic applications such as Human-Robot Collaboration, robots are expected not only to perform predefined tasks but also to understand human behavior and intentions. This thesis develops a state-based graph representation approach for learning task structures from human demonstrations collected in Virtual Reality (VR) environments. While traditional action-based representations model actions as nodes, our approach represents environmental states as nodes and actions as edges.
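The state-based representation described above can be sketched minimally as follows. This is a hypothetical illustration, not the thesis implementation: the `State` and `TaskGraph` structures and the `add_transition` method are assumed names, and states are modeled as frozen sets of object-location facts so they can serve as hashable graph nodes.

```python
# Hypothetical sketch of a state-based task graph:
# environmental states are nodes, actions are labeled edges.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class State:
    # A normalized snapshot of the environment, e.g. (object, location) pairs.
    facts: frozenset

@dataclass
class TaskGraph:
    # Maps (source state, destination state) -> (action label, demonstration count).
    edges: dict = field(default_factory=dict)

    def add_transition(self, src: State, dst: State, action: str):
        # Repeated transitions from merged demonstrations increment the count.
        label, count = self.edges.get((src, dst), (action, 0))
        self.edges[(src, dst)] = (label, count + 1)

s0 = State(frozenset({("cube", "table")}))
s1 = State(frozenset({("cube", "gripper")}))
g = TaskGraph()
g.add_transition(s0, s1, "pick(cube)")
print(g.edges[(s0, s1)])  # -> ('pick(cube)', 1)
```

Because states rather than actions are the nodes, two demonstrations that reach the same normalized state converge on the same node, which is what allows variations and recovery detours to coexist in one graph.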
We collected demonstrations from multiple participants performing manipulation tasks. Our graph construction pipeline processed demonstration data through feature extraction, error correction, state identification, and rule-based action classification. A graph merging algorithm integrated multiple demonstrations using normalized state representations to create a unified graph that captures execution variations, occasional errors, and recovery sequences. We then implemented a path planning approach on the merged graph using Dijkstra’s algorithm with edges weighted by demonstration frequency.
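The path planning step can be illustrated with a small sketch of Dijkstra's algorithm over a frequency-weighted graph. The weighting scheme here (edge cost as the reciprocal of demonstration frequency, so frequently demonstrated transitions are cheaper) is one plausible reading of "edges weighted by demonstration frequency", assumed for illustration; the adjacency structure and state names are likewise hypothetical.

```python
# Hypothetical sketch: Dijkstra's algorithm on a merged task graph.
# Edge cost is 1/frequency, so well-demonstrated transitions are preferred.
import heapq

def shortest_path(adj, start, goal):
    # adj: node -> list of (neighbor, demonstration frequency) pairs
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, freq in adj.get(u, []):
            nd = d + 1.0 / freq  # higher frequency -> cheaper edge (assumption)
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    # Walk predecessors back from the goal to recover the path.
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]

adj = {
    "start":   [("grasped", 3), ("dropped", 1)],
    "dropped": [("grasped", 1)],   # recovery transition, seen once
    "grasped": [("placed", 3)],
}
print(shortest_path(adj, "start", "placed"))  # -> ['start', 'grasped', 'placed']
```

With this weighting the planner avoids the rarely demonstrated detour through the `dropped` state, while that error-recovery route remains available in the graph if the nominal transition fails.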
Results demonstrate that state-based representations can capture task structure while naturally handling variations. The merged graph reveals distinct execution patterns and represents error-recovery cycles when objects are mishandled, and the path planning approach finds the shortest execution paths. Comparative analysis with action-based representations shows that our approach performs better at representing action preconditions and effects, tracking task progress, and detecting errors.
Degree
Student essay
Collections
Date
2025-10-06
Author
Le, Jiahui
Zhou, Shuman
Keywords
Robotics
Human-Robot Collaboration
Learning from Demonstration
Knowledge Representation
Ontology
Task Graph Representation
Graph Search
Language
eng