Virtual Construction with Gesture and Speech
Ipke Wachsmuth
Faculty of Technology
University of Bielefeld
ipke@techfak.uni-bielefeld.de
Originating from research on natural language processing, expert system techniques, and human-machine communication, the AI Lab at Bielefeld found a new mission in 1995 when we started the AI and Computer Graphics Lab, centered around the VIENA and CODY projects, both of which had begun in 1993. Our central motivation was to use knowledge-based systems and other AI techniques to establish an intuitive communication link between humans and highly interactive 3D graphics. Realizing that Virtual Reality provides a most comprehensive communication medium and a multimodal interface to a host of multimedia applications, it was a natural step to further extend the Lab's mission to Intelligent MultiMedia and Virtual Reality. Our focus is now on building intelligent, more intuitive interfaces that yield highly interactive construction and design environments with the support of knowledge-based techniques.
The SGIM project, started in 1996, extends our efforts to multimodal speech and gesture interfaces. Our focus lies on 'co-verbal' gestures, i.e., gestures that accompany verbal utterances more or less spontaneously, for instance when pointing at an object ("this wheel") or when signaling a rotation ("around this way"). A main challenge is the coordination, that is, the temporal and semantic coupling, of the two modalities gesture and speech. Temporal coupling is also crucial for integrating the symbolic content of a gesture (e.g., that it is a pointing act) with its signal content (e.g., the pointing vector at the moment of the gesture's maximal expression). A knowledge-based simulation system developed in our lab, the Virtual Constructor, is used to manipulate CAD models of building-kit parts on a virtual assembly workbench. In this virtual construction scenario, gesture and speech are integrated to exploit their complementary input information.
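To make the idea of temporal and semantic coupling concrete, the following minimal Python sketch shows one plausible way to couple a deictic word with a pointing gesture: the stroke whose peak lies closest to the word's time interval supplies the pointing vector, which is then cast as a ray to select a referent part. All identifiers (SpeechToken, GestureStroke, resolve_deictic, the 0.5 s tolerance window) are hypothetical illustrations under assumed data structures, not the SGIM implementation.

    import math
    from dataclasses import dataclass
    from typing import List, Optional, Tuple

    @dataclass
    class SpeechToken:
        word: str                               # recognized word, e.g. "this"
        t_start: float                          # start time in seconds
        t_end: float                            # end time in seconds

    @dataclass
    class GestureStroke:
        t_peak: float                           # moment of maximal expression
        origin: Tuple[float, float, float]      # fingertip position
        direction: Tuple[float, float, float]   # normalized pointing vector

    @dataclass
    class Part:
        name: str
        position: Tuple[float, float, float]

    def resolve_deictic(token: SpeechToken, strokes: List[GestureStroke],
                        window: float = 0.5) -> Optional[GestureStroke]:
        """Temporal coupling: pick the stroke whose peak lies closest to the
        deictic word, within an assumed tolerance window of 0.5 seconds."""
        t_word = 0.5 * (token.t_start + token.t_end)
        candidates = [s for s in strokes if abs(s.t_peak - t_word) <= window]
        return min(candidates, key=lambda s: abs(s.t_peak - t_word), default=None)

    def referent(stroke: GestureStroke, parts: List[Part]) -> Part:
        """Semantic coupling: combine the symbolic content (a pointing act)
        with the signal content (the pointing vector) by casting a ray and
        choosing the part closest to that ray."""
        def distance_to_ray(p: Part) -> float:
            v = [pi - oi for pi, oi in zip(p.position, stroke.origin)]
            proj = sum(vi * di for vi, di in zip(v, stroke.direction))
            if proj < 0:
                return math.inf                 # part lies behind the pointing hand
            foot = [oi + proj * di for oi, di in zip(stroke.origin, stroke.direction)]
            return math.dist(p.position, foot)
        return min(parts, key=distance_to_ray)

For an utterance like "this wheel", resolve_deictic would couple the word "this" with the co-occurring stroke, and referent would then pick among the candidate parts; a type filter restricting candidates to wheels is omitted here for brevity.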
In the Bielefeld Artificial Intelligence and Virtual Reality Lab, research concentrates on integrating gestural and spoken human-machine communication in a virtual construction scenario. High-resolution spatial visualizations of CAD-based part models are presented at realistic size on a large projection screen and assembled into complex aggregates via Virtual Reality input devices (data gloves, position trackers, a speech recognition system). Knowledge-based techniques are employed both in the assembly simulation with the computer-graphical part models and in the interpretation of the combined speech and gesture input.
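The assembly simulation must decide which parts may be joined and how. As a hedged sketch of the general idea, not the actual CODY representation, the following models parts with typed connection ports and a compatibility table that determines which port types may mate; all identifiers and the stud/hole typing are invented for illustration.

    from dataclasses import dataclass, field
    from typing import Dict, Tuple

    # Assumed compatibility knowledge: which port types may mate.
    MATING = {("stud", "hole"), ("hole", "stud"),
              ("axle", "axle_hole"), ("axle_hole", "axle")}

    @dataclass
    class Part:
        name: str
        ports: Dict[str, str]                                  # port name -> port type
        connected: Dict[str, Tuple[str, str]] = field(default_factory=dict)

    def can_connect(a: Part, pa: str, b: Part, pb: str) -> bool:
        """A connection is admissible if both ports are free and their types mate."""
        if pa in a.connected or pb in b.connected:
            return False
        return (a.ports[pa], b.ports[pb]) in MATING

    def connect(a: Part, pa: str, b: Part, pb: str) -> None:
        """Record a symmetric connection between two admissible ports."""
        if not can_connect(a, pa, b, pb):
            raise ValueError(f"cannot join {a.name}.{pa} to {b.name}.{pb}")
        a.connected[pa] = (b.name, pb)
        b.connected[pb] = (a.name, pa)

    # Example: mounting a wheel on an axle.
    axle = Part("axle", {"left_end": "axle", "right_end": "axle"})
    wheel = Part("wheel", {"hub": "axle_hole"})
    connect(axle, "left_end", wheel, "hub")

Keeping connectivity symbolic in this way lets spoken and gestural commands ("put this wheel on the axle") be checked against admissible connections before any geometric motion is simulated.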
Talks: Billund (DK) 11-06-98, Tübingen 29-01-99, Bielefeld 16-05-01, Paderborn 21-06-01. [Article from KI 2/00; PDF] - [current slides; PDF]