To deliver information effectively, virtual human demonstrators must satisfy complex spatial constraints while replicating the motion coordination patterns observed in human-human interactions. This paper introduces a whole-body motion planning and synthesis framework that coordinates locomotion, body positioning, action execution, and gaze behavior for generic demonstration tasks among obstacles. © 2014 Springer International Publishing Switzerland.
Huang, Y., & Kallmann, M. (2014). Planning motions for virtual demonstrators. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 8637 LNAI, pp. 190–203). Springer Verlag. https://doi.org/10.1007/978-3-319-09767-1_24