Perception & Action in Virtual Environments

In the Perception and Action in Virtual Environments research group, our aim was to investigate human behavior, perception, and cognition using ecologically valid, immersive virtual environments. Virtual reality (VR) equipment enables our scientists to present sensory stimuli in a controlled virtual world and to manipulate or alter sensory input in ways that would not be possible in the real world. More specifically, VR technology enables us to manipulate the visual body, the contents of the virtual world, and the sensory stimuli (visual, vestibular, kinesthetic, tactile, and auditory) while a person performs or views an action.

Our group focuses on several different areas, all of which involve measuring human performance in complex everyday tasks such as spatial judgments, walking, driving, communicating, and spatial navigation. We investigate the impact of having an animated self-avatar on spatial perception, on the feeling of embodiment or agency, and on the ability of two people to communicate effectively. We are also interested in the impact of other avatars on human performance, emotion perception, and learning/training. Additionally, we are very interested in the visual and bodily control of locomotion and reaching tasks. Finally, we are interested in spatial navigation and memory for learned spatial layouts of small and large spaces. Our goal is to use state-of-the-art virtual reality technology to better understand how humans perceive sensory information, form an understanding of and remember their experiences in the surrounding world, and act within it. We use head-mounted displays (HMDs), large-screen displays, motion simulators, and sophisticated treadmills in combination with real-time rendering and control software and tools in order to immerse our participants in a virtual world. We use many different experimental design methods, including psychophysical methods, adaptation and dual-task paradigms, well-established and novel performance measures of behavioral tasks, and fMRI.
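
As an illustration of the kind of psychophysical method mentioned above, the following is a minimal, self-contained sketch of a generic 2-down/1-up adaptive staircase with a simulated observer. It is only a textbook-style example of the technique; the function name, parameters, and observer model are illustrative assumptions and do not correspond to the procedure of any particular study cited on this page.

```python
import random

def two_down_one_up_staircase(true_threshold, start_level=10.0, step=1.0,
                              n_reversals=8):
    """Minimal 2-down/1-up adaptive staircase with a simulated observer.

    The stimulus level decreases after two consecutive correct responses
    and increases after one incorrect response, so the procedure converges
    near the ~70.7% correct point of the observer's psychometric function.
    All values here are placeholders for illustration only.
    """
    level = start_level
    correct_streak = 0
    last_direction = None
    reversal_levels = []

    while len(reversal_levels) < n_reversals:
        # Simulated observer: correct when the stimulus is above threshold,
        # plus a small amount of random "lucky guess" noise.
        correct = level > true_threshold or random.random() < 0.05

        if correct:
            correct_streak += 1
            if correct_streak < 2:
                continue                      # need two in a row to step down
            direction = "down"
            level = max(level - step, 0.0)    # make the task harder
            correct_streak = 0
        else:
            direction = "up"
            level += step                     # make the task easier
            correct_streak = 0

        if last_direction and direction != last_direction:
            reversal_levels.append(level)     # record staircase reversals
        last_direction = direction

    # Threshold estimate: mean of the reversal levels.
    return sum(reversal_levels) / len(reversal_levels)


if __name__ == "__main__":
    print(two_down_one_up_staircase(true_threshold=4.0))
```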

Main research areas

Visual body influences perception:
Seeing a virtual avatar in the virtual environment influences egocentric distance estimates, and the effect is even stronger when the avatar is animated by the observer's own movements (Mohler, Presence, 2010). Eye height influences egocentric space and dimension estimates in virtual environments (Leyrer, APGV 2011). Seeing a virtual character (self or other) impacts subsequent performance of common tasks in virtual environments (McManus, supervised by Mohler, APGV 2011). The size of visual body parts (hands/arm length) influences size and distance estimates in virtual worlds (Linkenauger, ECVP and VSS 2011). Taken together, these results argue that the body plays a central role in the perception of the surrounding environment.
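
One common geometric account of why eye height matters for egocentric distance estimates is the angle-of-declination model: a target on the ground plane seen at a given angle below the horizon lies at a distance equal to eye height divided by the tangent of that angle, so a changed or misperceived eye height scales the distance estimate proportionally. The short sketch below only illustrates this geometry; it is a hypothetical example, not the model or analysis from Leyrer et al. (APGV 2011), and the numbers are made up.

```python
import math

def distance_from_declination(eye_height_m, declination_rad):
    """Angle-of-declination model (illustrative): a ground-plane target seen
    at a given angle below the horizon lies at d = eye_height / tan(angle)."""
    return eye_height_m / math.tan(declination_rad)

# Same visual angle, two assumed eye heights: the distance estimate scales
# in proportion to the assumed eye height (example values only).
angle = math.radians(10.0)
print(distance_from_declination(1.60, angle))  # ~9.1 m
print(distance_from_declination(1.76, angle))  # ~10.0 m
```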

The role of visual body information in human interaction and communication:
The current state of the art in motion-capture tracking enables scientists to animate avatars with multiple participants' body motions in real time. We have used this technology to conduct experiments investigating the role of body language in successful communication and interaction. We have found that body language is important for successful communication in a word-communication task and that both the speaker's and the listener's body movements (as seen through animated avatars) impact communication (Dodds, CASA, 2010). We have further shown that people move more when they are wearing Xsens Moven suits and using large-screen projection technology than when they are wearing Vicon rigid-body tracking objects and viewing the virtual world in a low field-of-view head-mounted display (Dodds, PLoS One, 2011). We have also investigated the role of visual information about the interaction partner on task performance in a table-tennis paradigm, and have shown that the social context (competitive or cooperative) mediates the use of visual information about the interaction partner (Streuber, EBR 2011). We have also used motion-capture technology to investigate the use of VR for medical training (Alexandrova, CASA 2011) and the emotional expression of body language (Volkova, IMRF 2011).

Self-motion perception while walking and reaching:
We have conducted studies to investigate the sensory contributions to encoding walking velocity (visual, vestibular, proprioceptive, efference copy) and have found a new measure for self-motion perception: active pointing trajectory (Campos, PLoS One, 2009). We have further demonstrated that imagined walking differs from physical walking: participants point in a way indicating that they do not simulate all of the sensory information involved in walking when they merely imagine it. Additionally, we have investigated humans' ability to detect when they are walking on a curved path and the influence of walking speed on curvature sensitivity. We have found that walking speed does influence curvature sensitivity: at slower walking speeds, people are less sensitive to walking on a curve. We exploited this perceptual finding to design a dynamic gain controller for redirected walking, which enables participants to walk unaided through a virtual city (Neth, IEEE-VR 2011). Finally, we have investigated motor learning for reaching given different viewpoints and different degrees of visual realism of the arm and environment, and we make suggestions for the use of VR in rehabilitation and motor-learning experiments (Shomaker, Tesch, Buelthoff & Bresciani, EBR 2011).
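
To make the idea of a speed-dependent gain controller concrete, here is a minimal sketch of how redirection could be scaled with walking speed: more path curvature is injected when the user walks slowly (and is therefore less sensitive to it), and less when they walk fast. All function names, thresholds, and numerical values are illustrative placeholders and are not the parameters of the controller described in Neth et al. (IEEE-VR 2011).

```python
def curvature_gain(walking_speed_mps,
                   slow_speed=0.75, fast_speed=1.40,        # hypothetical operating points (m/s)
                   max_curvature=0.12, min_curvature=0.05): # hypothetical curvature limits (1/m)
    """Return an injectable path curvature (1/m) that shrinks as walking speed
    grows, reflecting the finding that slower walkers are less sensitive to
    path curvature. Values are placeholders, not thresholds from the study."""
    if walking_speed_mps <= slow_speed:
        return max_curvature
    if walking_speed_mps >= fast_speed:
        return min_curvature
    # Linear interpolation between the two operating points.
    t = (walking_speed_mps - slow_speed) / (fast_speed - slow_speed)
    return max_curvature + t * (min_curvature - max_curvature)


def redirected_heading(virtual_heading_rad, walking_speed_mps, step_length_m):
    """Rotate the virtual heading slightly on every step; over many steps this
    bends the user's real-world path back toward the center of the tracking
    space while the virtual path remains straight."""
    delta = curvature_gain(walking_speed_mps) * step_length_m  # radians this step
    return virtual_heading_rad + delta
```

In practice such a controller would typically be combined with other redirection mechanisms (e.g. rotation gains and resets near the tracking-space boundary); the sketch only shows the speed-dependent scaling of curvature.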

Spatial perception and cognition:
Visiting Prof. Roy Ruddle investigated the role of body-based information in spatial navigation. He found that walking improves humans' cognitive maps of large virtual worlds (Ruddle, ToCHI 2011), and he investigated the role of body-based information and landmarks in route knowledge (Ruddle, Memory & Cognition 2011). We have also found that pointing to locations within one's city of residence relies on a single north-oriented reference frame, likely learned from maps (Frankenstein, Psychological Science, in press). Without maps available, navigators primarily memorize a novel space as local interconnected reference frames corresponding to a corridor or street (Meilinger, 2010; Hensen, supervised by Meilinger, CogSci 2011). Consistent with these results, entorhinal grid cells in humans quickly remap their grid orientation after a change in the surrounding environment (Pape, supervised by Meilinger, SfN 2011). Additionally, we have found that egocentric distance estimates are also underestimated in large-screen displays and are influenced by the distance to the screen (Alexandrova, APGV 2010).
