Motion Perception & Simulation
Perceiving how we are oriented and how we move through our surroundings is fundamental to human behavior: it allows us to anchor ourselves in the world and to determine possibilities for interacting with it. In the Motion Perception & Simulation group, we work toward a comprehensive understanding of these percepts. To do so, we take a two-pronged approach: on the one hand, we carry out fundamental research aimed at delineating how the brain processes multisensory stimuli into unified conscious experiences; on the other hand, we conduct applied research on the development of state-of-the-art motion simulation technologies. Ultimately, these two approaches build upon each other [#]: the more detailed our knowledge of the mechanisms that govern perception, the better we know how to achieve high-fidelity simulations; and the more realistic our simulations, the more advanced the research we can conduct on motion perception.
In our experiments, we develop and use equipment that grants us the highest possible degree of control over the stimuli that participants experience. Over more than 25 years of research at our institute, this endeavor has culminated in unique motion simulator facilities, the CyberMotion Simulator (CMS) and the CableRobot Simulator (CRS), which allow us to independently manipulate visual, auditory, tactile, and, most importantly for our purposes, inertial (physical) cues of motion and orientation. These simulators are dynamic motion platforms with cabins that accommodate a person and can be physically moved. Each simulator has specific motion capabilities, allowing us to recreate anything from basic linear or rotational motions to Formula 1 racing car or helicopter trajectories. We use the motion platforms in conjunction with visualization tools such as stereo projectors and head-mounted displays with motion compensation to simultaneously achieve highly realistic visual stimulation.
Our fundamental research investigates both the low-level processes of uni- and multisensory visual/inertial motion perception and the high-level abstract representations of self-motion, including the conscious experience of, and cognitive responses to, self-motion. Low-level research allows us to describe the relation between actual and perceived motion characteristics, whereas through high-level research we can, for instance, better understand the causes of motion sickness and predict the subjective experience of motion simulation fidelity.
In our low-level fundamental research on motion perception, we determine how the brain processes motion stimuli. We measure perception in response to stimuli, formulate algorithms to describe the data, and determine how and where these algorithms may be implemented in the brain. To quantify perception, we combine methods that each provide specific information. Direct but subjective measures of perception can be obtained using so-called psychophysical methods, where participants make judgements about (relative) properties of stimuli. Examples are Forced Choice tasks, where we determine how well participants can discriminate between stimuli [#]; Magnitude Estimation tasks, where participants provide subjective estimates of a stimulus attribute [#]; and the Method of Adjustment, where participants reproduce stimuli [#]. Indirect but objective measures of perception can be obtained from physiological measurements, for instance with eye-trackers [#]. To determine where in the brain certain processes occur, we can perform neuroimaging: we measure electrical activity in the cortex with electroencephalography (EEG), or hemodynamic activity (i.e., blood flow) with functional Near-InfraRed Spectroscopy (fNIRS) [#].
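To illustrate how Forced Choice data of this kind are commonly analyzed (a generic sketch with made-up numbers, not our analysis pipeline): one can fit a psychometric function to the proportion of responses at each stimulus level and read off the point of subjective equality (PSE) and a discrimination threshold.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

# Hypothetical 2AFC data: comparison stimulus intensities (e.g., deg/s)
# and the proportion of "comparison felt faster" responses per level.
intensities = np.array([2.0, 4.0, 6.0, 8.0, 10.0, 12.0])
p_faster = np.array([0.05, 0.15, 0.40, 0.65, 0.90, 0.95])

def cumulative_gaussian(x, mu, sigma):
    """Psychometric function: probability of choosing the comparison."""
    return 0.5 * (1 + erf((x - mu) / (sigma * np.sqrt(2))))

# Fit the point of subjective equality (mu) and the spread (sigma),
# which indexes how well the two stimuli can be discriminated.
(mu, sigma), _ = curve_fit(cumulative_gaussian, intensities, p_faster,
                           p0=[7.0, 2.0])

# A just-noticeable difference is often taken as the distance between
# the 50% and ~84% points of the fitted curve, i.e. one sigma.
print(f"PSE = {mu:.2f}, JND = {sigma:.2f}")
```

In practice, maximum-likelihood fits on trial-level binary responses are preferred over least-squares fits on proportions, but the logic is the same.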
In our high-level research, we seek to determine the consequences of perception, such as (loss of) balance [#] and motion sickness [#], as well as qualities of conscious experience, such as perceived simulation fidelity and workload, in complex scenarios with a high level of ecological validity. As stimuli, we present, for example, virtual driving/flying scenarios, and we have 'played back' visual-inertial recordings of actual car driving and helicopter flight. For data collection in these experiments, we have adopted questionnaires, adapted Magnitude Estimation methods, and developed new methods (i.e., 'continuous rating') [#].
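Continuous rating yields a time-varying response per participant. A common first processing step for such traces (sketched here with hypothetical numbers, not our actual data or software) is to resample each participant's irregularly sampled trace onto a shared time base and average across participants:

```python
import numpy as np

# 60 s trial, resampled to a common 10 Hz grid.
common_t = np.linspace(0, 60, 601)

def resample_trace(times, ratings, grid=common_t):
    """Linearly interpolate one participant's rating onto the grid."""
    return np.interp(grid, times, ratings)

# Two illustrative participants (times in s, ratings on a 0-10 scale).
p1 = resample_trace(np.array([0, 10, 30, 60]), np.array([5, 7, 3, 5]))
p2 = resample_trace(np.array([0, 20, 40, 60]), np.array([4, 8, 2, 6]))

# Group-level rating time course.
group_mean = np.mean([p1, p2], axis=0)
```

The resulting group time course can then be related, sample by sample, to the presented motion profile.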
Our applied research on simulation technologies aims at developing simulations that are as close to reality as possible; in more technical terms, we strive to achieve high-fidelity, ecologically valid simulations. To this end, we work on the creation of photorealistic visual environments for use in our experiments [#]; we explore ways to make optimal use of a motion simulator's capabilities by maximizing the use of its workspace while taking into account knowledge of the trajectory and the physical limits of the simulator [#]; and we investigate how we can exploit novel technologies to further increase simulation fidelity, for instance by providing active somatosensory stimulation [#].
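The workspace problem described above is classically addressed with motion-cueing ('washout') filters. The sketch below, with illustrative parameter values rather than our implementation, shows the core idea: a high-pass filter reproduces acceleration onsets, to which the vestibular organs are most sensitive, while sustained components are washed out so that platform travel stays bounded.

```python
import numpy as np

def washout_highpass(accel, dt=0.01, omega_n=1.0, zeta=1.0):
    """Second-order high-pass filter on a commanded acceleration.

    Implements H(s) = s^2 / (s^2 + 2*zeta*omega_n*s + omega_n^2)
    by forward-Euler integration of the state equation
    x'' + 2*zeta*omega_n*x' + omega_n^2*x = a, whose second
    derivative x'' is the high-pass-filtered output.
    """
    x = v = 0.0                       # filter states x and x'
    out = np.zeros_like(accel)
    for i, a in enumerate(accel):
        a_hp = a - 2 * zeta * omega_n * v - omega_n**2 * x  # x''
        out[i] = a_hp
        x += v * dt
        v += a_hp * dt
    return out

# Sustained 1 m/s^2 forward acceleration for 10 s: the filtered command
# reproduces the onset, then decays toward zero, so the platform
# displacement it implies converges instead of growing without bound.
t = np.arange(0.0, 10.0, 0.01)
filtered = washout_highpass(np.ones_like(t))
```

Choosing the filter parameters from the known trajectory and the simulator's physical limits, as described above, is what turns this basic scheme into workspace-optimal cueing.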