PanoLab (Spherical Display)

Since 1997, we have used a large-screen, half-cylindrical virtual reality projection system to study human perception, carrying out studies in a variety of areas, including spatial cognition and the perceptual control of action. In 2005, we made a number of fundamental improvements to the system. The most noticeable change was to the screen size and geometry: the screen was extended horizontally (from 180 to 230 degrees) and a floor screen and projector were added. The projection screen curves smoothly from the wall projection to the floor projection, so that the overall screen geometry can be described as a quarter-sphere. Vertically, the screen subtends 125 degrees (25 degrees of visual angle upwards and 100 degrees downwards from the normal observation position).
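
As a rough illustration of the geometry described above, the Python sketch below checks whether a viewing direction from the normal observation position falls within the screen's angular extent (±115 degrees of azimuth, +25 to -100 degrees of elevation). The axis convention and function name are our own assumptions for illustration, not part of the system's software.

```python
import math

# Angular extents of the PanoLab screen, taken from the text:
# 230 deg horizontally; vertically from +25 deg (up) to -100 deg (down).
AZIMUTH_HALF_RANGE_DEG = 230.0 / 2.0   # +/-115 deg around straight ahead
ELEVATION_MAX_DEG = 25.0               # upward limit
ELEVATION_MIN_DEG = -100.0             # downward limit (floor screen)

def direction_on_screen(x, y, z):
    """Check whether a unit view direction (x: right, y: up, z: forward)
    from the observation position falls on the quarter-sphere screen.
    The axis convention here is an assumption for illustration."""
    azimuth = math.degrees(math.atan2(x, z))                      # left/right
    elevation = math.degrees(math.asin(max(-1.0, min(1.0, y))))   # up/down
    return (abs(azimuth) <= AZIMUTH_HALF_RANGE_DEG
            and ELEVATION_MIN_DEG <= elevation <= ELEVATION_MAX_DEG)

print(direction_on_screen(0.0, 0.0, 1.0))    # True: straight ahead
print(direction_on_screen(0.0, 0.0, -1.0))   # False: behind the observer
print(direction_on_screen(0.0, -1.0, 0.0))   # True: straight down (floor screen)
```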

In 2011, the image-generation and projection setup was significantly updated. The existing four JVC SX21 D-ILA projectors (1400x1050) and curved mirrors were replaced with six EYEVIS LED DLP projectors (1920x1200), simplifying the projection setup and increasing the overall resolution. To compensate for the visual distortions caused by the curved projection screen, and to achieve soft-edge blending for seamless overlap areas, we developed a flexible warping solution using the warp-and-blend features of the NVIDIA Quadro chipsets. This gives us the flexibility of a hardware-based warping solution with the accuracy of a software-based one. The calibration data needed for the warping and blending stages is generated by a camera-based projector auto-calibration system (DOMEPROJECTION.COM). Image generation is handled by a high-end render cluster consisting of six client image-generation PCs and one master PC. To avoid tearing artifacts in the multi-projector setup, the rendering computers use frame-synchronized graphics cards to synchronize the projected images.
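
The following minimal NumPy sketch illustrates what the warp and blend stages compute per projector. The lookup-table representation and function signature are illustrative assumptions; in the actual system these operations run in the Quadro warp-and-blend hardware, driven by calibration data from the camera-based auto-calibration.

```python
import numpy as np

def warp_and_blend(frame, warp_map, blend_map):
    """Apply a per-pixel warp lookup and soft-edge blend to one projector image.

    frame:     (H, W, 3) rendered image
    warp_map:  (H, W, 2) integer source coordinates (row, col) for each output
               pixel, e.g. produced offline by camera-based projector calibration
    blend_map: (H, W) attenuation in [0, 1] for the soft-edge overlap regions
    """
    rows = warp_map[..., 0]
    cols = warp_map[..., 1]
    warped = frame[rows, cols]                    # nearest-neighbour resampling
    return (warped * blend_map[..., None]).astype(frame.dtype)

# An identity warp with a uniform blend leaves a frame unchanged:
h, w = 4, 6
frame = np.random.randint(0, 255, (h, w, 3), dtype=np.uint8)
identity = np.stack(np.mgrid[0:h, 0:w], axis=-1)
assert np.array_equal(warp_and_blend(frame, identity, np.ones((h, w))), frame)
```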

In addition to improving the visual aspects of the system, we increased the quality, number, and type of input devices. Participants in the experiments can, for example, interact with the virtual environment via actuated Wittenstein helicopter controls, joysticks, a SpaceMouse, steering wheels, a go-kart, or a virtual bicycle (VRBike). Furthermore, a Razer Hydra 6-DOF joystick can be used for wand navigation and small-volume tracking. Some of the input devices offer force feedback: with the VRBike, for example, one can actively pedal and steer through the virtual environment, and the virtual inertia and incline are reflected in the pedals' resistance.
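
As a hedged sketch of how incline and inertia might translate into pedal resistance, the following Python function combines the standard gravity, inertia, and rolling-resistance terms. The formula and coefficients are textbook approximations of our own choosing, not the VRBike's actual force model.

```python
import math

G = 9.81  # gravitational acceleration (m/s^2)

def pedal_resistance_force(mass_kg, incline_deg, accel_ms2, rolling_coeff=0.005):
    """Illustrative resistive force a VRBike-style device might render.

    Combines the gravity component on a virtual incline, virtual inertia
    during acceleration, and simple rolling resistance. The rolling_coeff
    default is an assumed value for a smooth surface.
    """
    incline = math.radians(incline_deg)
    gravity_term = mass_kg * G * math.sin(incline)               # incline resistance
    inertia_term = mass_kg * accel_ms2                           # virtual inertia
    rolling_term = rolling_coeff * mass_kg * G * math.cos(incline)
    return gravity_term + inertia_term + rolling_term

# Riding an 85 kg rider+bike up a 5 % grade at constant speed:
print(f"{pedal_resistance_force(85.0, math.degrees(math.atan(0.05)), 0.0):.1f} N")
```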

Read more about the research conducted here:

Visual Self-Motion