In: Broll, W. & Schäfer, L. (Editors):
The Future of VR and AR Interfaces - Multimodal, Humanoid, Adaptive and Intelligent.
Proceedings of the workshop at IEEE Virtual Reality 2001, Yokohama, Japan.
GMD Report No. 138, March 2001.
This article presents a modular approach to incorporating multimodal,
gesture- and speech-driven interaction into virtual reality systems.
Based on existing techniques for modelling VR applications, the overall
task is separated into distinct problem categories: from sensor
synchronisation to a high-level description of crossmodal temporal
and semantic coherence, a set of solution concepts is presented that
fits seamlessly into both the static (scenegraph-based) representation
and the dynamic (render-loop and immersion) aspects of a real-time
application. The developed framework establishes a connecting layer
between raw sensor data and a general functional description of
multimodal, scene-context-related evaluation procedures for VR setups.
As an example of these concepts, their implementation in a system for
virtual construction is described.
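To give a feel for the kind of connecting layer the abstract describes, the following minimal C++ sketch buffers timestamped samples from several recognisers on a common clock and hands them to registered crossmodal evaluation procedures once per render-loop frame. All names (SensorSample, MultimodalLayer, the 250 ms coherence window) are illustrative assumptions, not the paper's actual API.

#include <cstdio>
#include <functional>
#include <string>
#include <utility>
#include <vector>

struct SensorSample {
    double timestamp;      // seconds on a common clock, after synchronisation
    std::string modality;  // e.g. "gesture" or "speech"
    std::string token;     // recognised symbol (posture, word, ...)
};

class MultimodalLayer {
public:
    using Evaluator = std::function<void(const std::vector<SensorSample>&)>;

    void push(SensorSample s) { buffer_.push_back(std::move(s)); }

    // Register an evaluation procedure, e.g. one checking the temporal
    // coherence of a pointing gesture and a spoken deictic reference.
    void addEvaluator(Evaluator e) { evaluators_.push_back(std::move(e)); }

    // Called once per render-loop frame: hand the buffered, synchronised
    // window of samples to every registered evaluator, then clear it.
    void evaluateFrame() {
        for (const auto& e : evaluators_) e(buffer_);
        buffer_.clear();
    }

private:
    std::vector<SensorSample> buffer_;
    std::vector<Evaluator> evaluators_;
};

int main() {
    MultimodalLayer layer;
    // Hypothetical crossmodal check: a pointing gesture followed within
    // 250 ms by a spoken deictic ("this") is treated as one coherent
    // multimodal reference.
    layer.addEvaluator([](const std::vector<SensorSample>& window) {
        for (const auto& a : window)
            for (const auto& b : window)
                if (a.modality == "gesture" && b.modality == "speech" &&
                    a.token == "point" && b.token == "this" &&
                    b.timestamp >= a.timestamp &&
                    b.timestamp - a.timestamp < 0.25)
                    std::printf("coherent multimodal reference at %.2fs\n",
                                a.timestamp);
    });
    layer.push({0.10, "gesture", "point"});
    layer.push({0.12, "speech", "this"});
    layer.evaluateFrame();  // in a VR system: invoked each render frame
}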