Real-time gesture synthesis:
Max is able to create and execute gesture animations from MURML descriptions
of their essential spatiotemporal features, i.e. of their meaningful "stroke"
phases (see example below). To this end, an underlying anthropomorphic kinematic
skeleton for the agent was defined, comprising 103 DOF in 57 joints, all subject
to realistic joint limits. This articulated body is driven in real-time by a
hierarchical gesture generation model that emphasizes the accurate and reliable
reproduction of the prescribed features. It includes two main stages:
- High-level gesture planning:
During gesture planning, the expressive phase of a gesture is defined by
setting up a fully qualified set of movement constraints. This stage includes
optionally selecting a gesture from a lexicon of abstract templates (formulated
in MURML), allocating body parts, expanding two-handed symmetrical gestures,
resolving deictic references, and defining the timing of the stroke phase
(see the first sketch after this list).
- Motor planning and execution:
During lower-level motor planning, a solution is sought for controlling the
movements of the agent's upper limbs such that the given constraints are satisfied.
A kinematic model of human hand-arm movement is employed that is based on
findings from human movement science and neurophysiology.
Based on a hierarchical organization of motor control, several motor programs
for the execution of different submovements are instantiated and arranged
during planning. At execution time, motor programs activate and complete
themselves autonomously and transfer activation to one another (see the
second sketch after this list).
That way, the motor programs coordinate the application of suitable motion
generation techniques to control realistic movements of the hand, the wrist,
and the arm.
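
To make the planning stage concrete, the following is a minimal Python sketch
of how an abstract template might be expanded into a fully qualified set of
movement constraints. All names here (StrokeConstraints, LEXICON, plan_gesture,
and the constraint slots) are illustrative assumptions, not the system's actual
MURML vocabulary or data structures.

    # Illustrative sketch only: names and slot values are assumptions,
    # not the actual MURML/engine vocabulary.
    from dataclasses import dataclass, replace
    from typing import Optional

    @dataclass
    class StrokeConstraints:
        hand: str                      # "right" or "left" (allocated body part)
        handshape: str                 # symbolic handshape label
        trajectory: list               # way points of the wrist path
        start: Optional[float] = None  # stroke onset (s)
        end: Optional[float] = None    # stroke end (s)

    # Lexicon of abstract gesture templates (stand-ins for MURML entries).
    LEXICON = {
        "sweep": StrokeConstraints("right", "flat",
                                   ["chest-center", "periphery-right"]),
    }

    def mirror(path):
        # Mirror way points across the body midline for the second hand.
        return [p.replace("right", "left") for p in path]

    def plan_gesture(name, two_handed=False, deictic_target=None,
                     stroke_start=0.0, stroke_end=0.6):
        # Select a template and fill in timing, yielding qualified constraints.
        t = LEXICON[name]
        c = replace(t, trajectory=list(t.trajectory),
                    start=stroke_start, end=stroke_end)
        if deictic_target is not None:      # resolve deictic reference
            c.trajectory.append(deictic_target)
        constraints = [c]
        if two_handed:                      # expand symmetrical gesture
            constraints.append(replace(c, hand="left",
                                       trajectory=mirror(c.trajectory)))
        return constraints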
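
The execution-time coordination can likewise be sketched. Assuming a simple
fixed-step update loop (the MotorProgram class below is a hypothetical
stand-in, not the agent's actual controller), each motor program advances its
own submovement and, on completion, transfers activation to a successor:

    class MotorProgram:
        # Hypothetical stand-in for the system's motor programs.
        def __init__(self, name, duration, successor=None):
            self.name = name
            self.duration = duration    # submovement length (s)
            self.successor = successor  # receives activation on completion
            self.active = False
            self.elapsed = 0.0

        def activate(self, offset=0.0):
            self.active, self.elapsed = True, offset

        def update(self, dt):
            # Advance the submovement; transfer activation when done.
            if not self.active:
                return
            self.elapsed += dt
            # ... drive hand/wrist/arm controllers for this submovement ...
            if self.elapsed >= self.duration:
                self.active = False
                if self.successor is not None:
                    # Hand over, carrying any residual frame time.
                    self.successor.activate(self.elapsed - self.duration)

    # Preparation, stroke, and retraction chained by activation transfer.
    retraction  = MotorProgram("retraction", 0.4)
    stroke      = MotorProgram("stroke", 0.6, successor=retraction)
    preparation = MotorProgram("preparation", 0.3, successor=stroke)
    preparation.activate()
    for _ in range(100):                # 20 ms frames
        for program in (preparation, stroke, retraction):
            program.update(0.02)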
The generality and flexibility of this approach make it possible, first, to
synthesize a great variety of gestures and, second, to integrate the gestural
movement precisely with external temporal constraints, e.g., synchrony with
pitch accents in simultaneous speech.
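
For instance, a stroke can be scheduled against an externally given accent
time. The placement rule below, which puts the kinematic peak at the temporal
midpoint of the stroke, is an illustrative assumption rather than the system's
actual scheduling scheme:

    def schedule_stroke(accent_time, stroke_duration, peak_fraction=0.5):
        # Place the stroke so its kinematic peak coincides with the accent.
        start = accent_time - peak_fraction * stroke_duration
        return start, start + stroke_duration

    start, end = schedule_stroke(accent_time=1.25, stroke_duration=0.6)
    # start = 0.95 s, end = 1.55 s; the peak falls on the accent at 1.25 s.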