In W. Horn (ed.): ECAI 2000 - Proceedings of the 14th European Conference on Artificial Intelligence (pp. 663-667).
Amsterdam: IOS Press, 2000.
Abstract. The inclusion of additional modalities besides speech into the communicative
behavior of virtual agents has moved into the focus of human-computer interface
researchers, as humans are more likely to consider computer-generated figures
lifelike when appropriate nonverbal behaviors are displayed in addition to speech.
In this paper, we propose a knowledge-based approach for the automatic generation
of gesture animations for an articulated figure. It combines a formalism for the
representation of spatiotemporal gesture features, methods for planning individual
gesture animations with respect to form and timing, and the formation of arm
trajectories. Finally, enhanced methods for rendering animations from motor
programs are incorporated in the execution of planned gestures. The approach is
targeted at achieving a greater variety of gestures as well as a higher degree of
lifelikeness in synthetic agents.