Intelligent Systems Lab Project: Environmental Display
Participants
- Tobias Haberkorn
- Claudius Strub
- Jens Wittrowski
Supervisors
- Florian Lier
- Denis Schulze
Motivation
An intelligent system integrated into a room may want to display pictures, videos, or text
in order to interact with a person, but may lack a screen with a suitable orientation (i.e. perpendicular to the projection axis).
However, common living spaces contain several surfaces, such as doors and cupboards, which could serve as
adequate planes to project onto with a projector. The goal is therefore to develop a system that can recognize
such planar surface patches and distort and move the to-be-displayed image so that it appears undistorted. Since this is a complex task, the main
focus of this project is to dynamically find out where adequate planes are situated in the room, choose one, and project the information onto that plane
using a micro projector.
Application Scenario
The following 'story' illustrates intended applications of our system:
The intelligent room or a mobile system wants to present information to a person. The room is equipped with a depth camera and a micro projector, both oriented in the same direction. The system uses the depth camera to detect suitable planes near the desired position and renders an appropriately distorted visualization so that the information appears undistorted on the best-suited plane. More specifically, the image is transformed and scaled to fit the chosen plane, so the person inside the room sees the presented information at an adequate size and without any distortion. For greater flexibility, camera and projector could be mounted on a pan-tilt unit in future versions.
Objectives
The project goals are to:
- find suitable planes in depth images taken from a 3D camera
- choose one of the detected planes according to optimality criteria (e.g. size, orientation, distance)
- compute transformation and projection parameters for the image by considering camera and projector factors (e.g. location, orientation, resolution, FoV)
- project the image onto the chosen plane
Description
Hardware
We used the SwissRanger SR4000 to obtain a depth image. It is a ToF (time-of-flight) camera with a range of up to 10 m. Furthermore, we wanted to use the Microvision SHOWWX Pico laser projector. This tiny projector is always in focus and has a range of up to 5 m. With a pan-tilt unit we could construct the following system architecture.
System Architecture
This image shows a stationary version of our desired system architecture, as described above. It can visualize information on any surface available in the environment.
Delays in the delivery of the projector forced us to adapt our project plan to the available equipment. We use a standard projector instead of the pico projector, which allows us to demonstrate the functioning system in a static (no pan/tilt) setup.
Software Architecture
We developed three programs which communicate via XCF. The first one (1.) is a given ICEWING plugin developed by the Applied Informatics Group, which we improved and integrated. This program grabs the images from our time-of-flight camera, detects the planar surfaces, and computes their rotations and translations. The information about the rotations and translations is sent via XCF to our second program (2.). There, all surfaces are sorted according to size and distance to the last chosen plane, and you can switch between them to select the surface onto which you want to project. Information about this surface is sent to the last part of our architecture (3.). In addition, this program receives the image that should be projected. The image is transformed with OpenGL to fit the selected surface (from 2.). You can also run only the first (1.) and third (3.) parts of the architecture to project onto all detected surfaces. We implemented our programs in C++.
Results
- A static setup of projector and ToF camera is necessary, but a quick manual calibration of the system is possible.
- The projection can be done in a dynamic environment.
- An adequate transformation is computed for arbitrary translations as well as rotations around two axes.
- Several strategies for choosing a plane from the detected surfaces have been implemented and tested.
- Choosing the surface nearest to the camera.
- Choosing the plane with the smallest deviation from the mean position of the last three surfaces.
- Choosing the surface with the biggest diameter.
- Bugs in the given ICEWING plugins were fixed, and further unresolved problems were identified.
Discussion and Conclusion
In general, the project did not achieve the predefined goal of stable projection onto arbitrary surfaces. Unfortunately, administrative problems and hardware delivery delays severely set back the implementation. Problems we detected during this project include:
- Bugs in the given first part, such as wrong rotations of the normal vectors.
- The combination of different ICEWING plugins and the high sensitivity of the parameter configuration (which had to be adjusted frequently) require time-consuming tuning for each setup.
- The detected planes have a high variance in position, size and orientation.
- The detected planes are reported with the wrong size when rotations become negative.
- An inappropriate choice of the surface vertices, which leads to high fluctuation.
Outlook
- Further work on the plane description plugin is necessary to obtain correct sizes and rotations of the detected surfaces, as well as measures to reduce fluctuation in the vertices.
- Possible hardware updates:
- A pan-tilt unit with laser projector and sensor would be a useful extension.
- An alternative sensor could be the Kinect (PrimeSense).