In I. Wachsmuth and M. Fröhlich (eds.):
Gesture and Sign Language in Human-Computer Interaction (pp. 173-184).
Lecture Notes in Artificial Intelligence, Vol. 1371. Springer-Verlag, 1998.
To recognise gestures performed by people without disabilities during verbal
communication, so-called coverbal gestures, a flexible system with a task-
oriented design is proposed. Flexibility is achieved through different
kinds of modules, conceived as agents, which are grouped into different levels.
They can easily be reconfigured or rewritten to suit another application.
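
The layered-agent organisation can be pictured with a small sketch. The following Python fragment is not from the paper; the agent names, level numbers, and data fields are illustrative assumptions. It only shows how agents grouped into levels can be swapped or reconfigured without touching the rest of the system.

```python
# Minimal sketch (not the authors' code) of the layered-agent idea:
# agents are small interchangeable modules grouped into levels, and a
# configuration simply lists which agents populate which level, so that
# rewriting or replacing one agent reconfigures the system for a new task.

from typing import Callable, Dict, List

Agent = Callable[[dict], dict]   # an agent maps an input record to an output record

class LayeredAgentSystem:
    def __init__(self, levels: Dict[int, List[Agent]]):
        # levels: level number -> list of agents active on that level
        self.levels = levels

    def process(self, data: dict) -> dict:
        # Pass the record through the levels in ascending order; each agent
        # on a level may add or refine entries in the shared record.
        for level in sorted(self.levels):
            for agent in self.levels[level]:
                data = agent(data)
        return data

# Hypothetical agents, for illustration only.
def hand_posture_agent(data: dict) -> dict:
    data["posture"] = "index_extended" if data.get("glove_flex", 1.0) < 0.2 else "fist"
    return data

def pointing_agent(data: dict) -> dict:
    data["gesture"] = "pointing" if data.get("posture") == "index_extended" else "unknown"
    return data

system = LayeredAgentSystem({1: [hand_posture_agent], 2: [pointing_agent]})
print(system.process({"glove_flex": 0.1}))
# -> {'glove_flex': 0.1, 'posture': 'index_extended', 'gesture': 'pointing'}
```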
This system of layered agents uses an abstract body model to transform the
data acquired from the six-degree-of-freedom sensors and the data gloves
into a first-level symbolic description of gesture features. In a first
integration step, the first-level symbols are combined into second-level
symbols describing a whole gesture. These second-level symbolic gesture
descriptions are the entities that can be integrated with speech tokens to
form multi-modal utterances.
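
The processing chain from raw sensor data to a multi-modal utterance might be sketched as follows. This is a hypothetical Python illustration, not the authors' implementation; the thresholds, symbol names, and the simple merge rule are assumptions made purely for the example.

```python
# Minimal sketch, under assumed data structures, of the two integration steps
# described above: raw sensor/glove readings are mapped via a (here trivial)
# body model to first-level feature symbols, those are combined into a
# second-level symbol naming a whole gesture, and that symbol is merged with
# speech tokens into a multi-modal utterance. All thresholds and symbol names
# are illustrative assumptions.

from dataclasses import dataclass
from typing import List

@dataclass
class SensorFrame:
    hand_height: float      # from a six-degree-of-freedom sensor (metres)
    index_flexion: float    # from a data glove, 0 = straight, 1 = fully bent

def to_first_level_symbols(frame: SensorFrame) -> List[str]:
    """Body-model step: turn raw readings into gesture-feature symbols."""
    symbols = []
    symbols.append("HAND_RAISED" if frame.hand_height > 1.2 else "HAND_LOW")
    symbols.append("INDEX_EXTENDED" if frame.index_flexion < 0.2 else "INDEX_BENT")
    return symbols

def to_second_level_symbol(features: List[str]) -> str:
    """First integration step: feature symbols -> whole-gesture symbol."""
    if "HAND_RAISED" in features and "INDEX_EXTENDED" in features:
        return "POINTING"
    return "UNKNOWN_GESTURE"

def integrate_with_speech(gesture: str, speech_tokens: List[str]) -> dict:
    """Second integration step: gesture symbol + speech tokens -> utterance."""
    return {"speech": speech_tokens, "gesture": gesture}

frame = SensorFrame(hand_height=1.4, index_flexion=0.1)
utterance = integrate_with_speech(
    to_second_level_symbol(to_first_level_symbols(frame)),
    ["put", "that", "there"],
)
print(utterance)   # {'speech': ['put', 'that', 'there'], 'gesture': 'POINTING'}
```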