Motion Perception & Simulation

Perceiving how we are oriented and how we move through our surroundings is fundamental to human behavior; it allows us to anchor ourselves and to determine possibilities for interactions with the world. In the Motion Perception & Simulation group, we worked to achieve a comprehensive understanding of these percepts.

To do so, we took a two-pronged approach: on the one hand, we carried out fundamental research aimed at delineating how the brain processes multisensory stimuli to arrive at unified conscious experiences. On the other hand, we conducted applied research on the development of state-of-the-art motion simulation technologies. Ultimately, these two approaches built upon each other: the more detailed our knowledge of the mechanisms that govern perception, the better we knew how to achieve high-fidelity simulations; and the more realistic our simulations, the more advanced the research on motion perception we could conduct.

In our experiments, we developed and used equipment that granted us the highest degree of control over the stimuli that participants experience. Over more than 25 years of research at our institute, this endeavor culminated in unique motion simulator facilities: the CyberMotion Simulator (CMS) and the CableRobot Simulator (CRS), which allowed us to independently manipulate visual, auditory, tactile, and, most importantly for our purposes, inertial (physical) cues to motion and orientation. These simulators are dynamic motion platforms, each featuring a cabin that accommodates a person and can be physically moved. Each simulator has its own specific motion capabilities, and together they allowed us to recreate anything from basic linear or rotational motions to Formula 1 racing car or helicopter trajectories. We used the motion platforms in conjunction with visualization tools, such as stereo projectors and head-mounted displays with motion compensation, to simultaneously achieve highly realistic visual stimulation.
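To illustrate what motion compensation for a head-mounted display involves, the sketch below expresses a tracked head pose relative to the moving cabin, so that the rendered viewpoint does not inherit the platform's own motion. This is a minimal illustration under assumed conventions (4x4 homogeneous transforms, hypothetical poses), not the rendering pipeline used on our simulators.

# Illustrative sketch with assumed conventions; not our actual rendering pipeline.
import numpy as np

def head_pose_in_cabin(T_lab_cabin: np.ndarray, T_lab_head: np.ndarray) -> np.ndarray:
    # Both inputs are 4x4 homogeneous transforms measured in the lab frame.
    # The result is the head pose expressed in the cabin frame, which is what
    # should drive the rendered viewpoint on a moving platform.
    return np.linalg.inv(T_lab_cabin) @ T_lab_head

# Hypothetical poses: cabin translated 1 m along x, head 1.2 m above the cabin floor.
T_lab_cabin = np.eye(4)
T_lab_cabin[0, 3] = 1.0
T_lab_head = np.eye(4)
T_lab_head[0, 3] = 1.0
T_lab_head[2, 3] = 1.2
print(head_pose_in_cabin(T_lab_cabin, T_lab_head))  # only the head-in-cabin offset remains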

From Fundamental to Applied Research

Our fundamental research investigated both the low-level processes of uni- and multisensory visual/inertial motion perception and the high-level, abstract representations of self-motion, including the conscious experience of, and cognitive responses to, self-motion. Low-level research allowed us to describe the relation between actual and perceived motion characteristics, whereas high-level research allowed us, for instance, to better understand the causes of motion sickness and to predict the subjective experience of motion simulation fidelity.

In our low-level fundamental research on motion perception, we determined how the brain processes motion stimuli. We measured perception in response to stimuli, formulated algorithms to describe the data, and determined how and where these algorithms may be implemented in the brain. To quantify perception, we combined methods that each provide specific information: direct but subjective measures of perception could be obtained using so-called psychophysical methods, in which participants make judgements about (relative) properties of stimuli. Examples are Forced Choice tasks, where we determined how well participants could discriminate between stimuli; Magnitude Estimation tasks, where participants provided subjective estimates of a stimulus attribute; and the Method of Adjustment, where participants reproduced stimuli. Indirect but objective measures of perception could be obtained from physiological measurements, for instance with eye-trackers. To determine where in the brain certain processes occur, we could measure electrical activity in the cortex with electroencephalography (EEG) or hemodynamic activity (i.e., blood flow) with functional Near-InfraRed Spectroscopy (fNIRS).
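As a concrete illustration of how Forced Choice data are commonly analyzed, the following Python sketch fits a cumulative-Gaussian psychometric function to hypothetical discrimination responses. The stimulus values, response proportions, and units are assumptions chosen for illustration; this is not data or analysis code from our experiments.

# Minimal illustration with made-up data; not analysis code from our experiments.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Hypothetical stimulus intensities (e.g., rotation amplitude) and the proportion
# of trials on which the comparison stimulus was judged as "larger".
intensities = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
prop_larger = np.array([0.05, 0.15, 0.40, 0.65, 0.90, 0.95])

def psychometric(x, mu, sigma):
    # Cumulative Gaussian: mu = point of subjective equality, sigma = discrimination threshold.
    return norm.cdf(x, loc=mu, scale=sigma)

params, _ = curve_fit(psychometric, intensities, prop_larger, p0=[3.5, 1.0])
print(f"PSE = {params[0]:.2f}, threshold (sigma) = {params[1]:.2f}")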

In our high-level research, we sought to determine consequences of perception, such as loss of balance and motion sickness, as well as qualities of conscious experience, such as perceived simulation fidelity and workload, in complex scenarios with a high level of ecological validity. As stimuli, we presented, for example, virtual driving and flying scenarios, and we 'played back' visual-inertial recordings of actual car driving and helicopter flight. For data collection in these experiments, we adopted questionnaires, adapted Magnitude Estimation methods, and developed new methods such as 'continuous rating'.
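One plausible way to handle such continuous rating data, sketched here purely for illustration (the time grid, rating traces, and interpolation scheme are assumptions, not our published method), is to resample each participant's rating trace onto a common time base and average across participants.

# Illustrative sketch only; the data and processing choices below are assumed.
import numpy as np

def average_ratings(traces, t_grid):
    # Resample each (time, rating) trace onto a common grid, then average across participants.
    resampled = [np.interp(t_grid, t, r) for t, r in traces]
    return np.mean(resampled, axis=0)

# Hypothetical ratings of simulation fidelity from two participants over 10 s.
t_grid = np.linspace(0.0, 10.0, 101)
traces = [
    (np.array([0.0, 4.0, 10.0]), np.array([0.5, 0.8, 0.6])),
    (np.array([0.0, 5.0, 10.0]), np.array([0.4, 0.9, 0.7])),
]
print(average_ratings(traces, t_grid)[:5])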

Our applied research on simulation technologies aimed to develop simulations that are as close to reality as possible: in more professional terms, we strove to achieve high-fidelity and ecologically valid simulations. To this end, we worked on the creation of photorealistic visual environments to use in our experiments; we explored ways to make optimal use of a motion simulator's capabilities, maximizing the use of the simulator workspace while taking into account knowledge of the trajectory and the physical limits of the simulator; and we investigated how we could exploit novel technologies to further increase simulation fidelity, for instance by providing active somatosensory stimulation.
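To give a flavor of the workspace problem, the sketch below applies the classical washout idea: high-pass filtering a desired acceleration profile so that sustained components, which would drive a platform out of its workspace, are removed while motion onsets are preserved. The filter order, cutoff frequency, and acceleration profile are assumptions chosen for illustration; this is not the cueing algorithm used on the CMS or CRS.

# Illustrative washout sketch with assumed parameters; not the CMS/CRS cueing algorithm.
import numpy as np
from scipy.signal import butter, lfilter

fs = 100.0                                 # sample rate in Hz (assumed)
t = np.arange(0.0, 10.0, 1.0 / fs)
accel = np.where(t < 5.0, 2.0, 0.0)        # hypothetical sustained 2 m/s^2 acceleration phase

b, a = butter(2, 0.3, btype="highpass", fs=fs)   # 0.3 Hz cutoff (assumed)
accel_platform = lfilter(b, a, accel)            # washed-out acceleration commanded to the platform

def displacement(acc):
    # Double-integrate to compare the excursion each acceleration profile would require.
    vel = np.cumsum(acc) / fs
    return np.cumsum(vel) / fs

print(f"raw displacement after 10 s:      {displacement(accel)[-1]:.1f} m")
print(f"washed-out displacement after 10 s: {displacement(accel_platform)[-1]:.1f} m")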
