Visuo-vestibular function in virtual environments

Current work in the CNS Lab focuses on multisensory integration by users of virtual environments. Integration of information from visual and vestibular systems, in particular, is known to play a key role in determining both the level of immersion and the degree of comfort felt by those experiencing virtual environments. We study these and related issues in work on virtual reality.

Mark Dennison, Zack Wisti, and Mike D'Zmura investigated the link between use of a head-mounted display in viewing a virtual environment and cybersickness: motion sickness associated with use of VR technology. Changes in stomach activity (measured by electrogastrogram, EGG), blinking, and breathing predict human subjects' reports of motion sickness. Measuring physiological variables during both head-mounted display viewing and display monitor viewing of the virtual environment lets us distinguish effects of arousal from effects of cybersickness. A paper titled "Use of physiological signals to predict cybersickness" describes the results.

Mark Dennison and Mike D'Zmura replicated and extended an early study by Dichgans, Held, Young, and Brandt (1972). A virtual environment that rotates about the line of sight causes alterations in the perceived direction of gravity. We found that this visual stimulation, provided by a head-mounted display, leads to significant increases in motion sickness that are not accompanied by postural instability. This speaks against the postural instability account of cybersickness proposed by Stoffregen and Smart (1998). The results are described in a paper titled "Cybersickness without the wobble: experimental results speak against postural instability theory".
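A rotation of the scene about the line of sight is a roll rotation of the display contents. As a minimal numpy sketch (our own convention, not taken from the paper), assuming the line of sight lies along the z axis of the viewing coordinate frame:

```python
import numpy as np

def roll_about_view_axis(points, angle_rad):
    """Rotate scene points about the z (line-of-sight) axis.

    points: (N, 3) array of x, y, z coordinates in viewer-centered space.
    Depth along the line of sight (z) is unchanged; only the x-y plane turns.
    """
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    R = np.array([[c,  -s,  0.0],
                  [s,   c,  0.0],
                  [0.0, 0.0, 1.0]])
    return points @ R.T

# A point on the horizontal x-axis, rolled by 90 degrees, ends up on the
# vertical y-axis: the whole visual scene appears tilted, which conflicts
# with the unchanged gravitational signal from the vestibular system.
p = np.array([[1.0, 0.0, 0.0]])
rolled = roll_about_view_axis(p, np.pi / 2)
```

Applying such a rotation continuously, frame after frame, yields the steadily rotating environment used as the stimulus.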

Mark Dennison and Mike D'Zmura studied the effects of brief perturbations of visual input on posture. Experiment participants navigated a virtual environment (VE) while standing and wearing a head-mounted display (HMD) or while viewing a monitor; visual information was manipulated to provide unexpected shoves in the horizontal plane. Postural instability, as measured by a balance board, increased with time only when perturbations were present. HMD users exhibited greater sway when exposed to visual perturbations than did monitor users. Yet motion sickness increased only when an HMD was used, and it occurred with or without participants undergoing perturbations. The results suggest that the postural instability generated by unexpected visual perturbation does not necessarily increase the likelihood of motion sickness in a virtual environment. The results are described in a paper titled "Effects of unexpected visual motion on postural sway and motion sickness".
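A visual "shove" of this kind can be implemented as a brief lateral offset added to the virtual camera position. The following is an illustrative sketch only; the function name, pulse shape, and parameter values are our assumptions, not details from the paper:

```python
import math

def perturbed_camera_x(base_x, t, onset, duration, amplitude):
    """Return the camera's horizontal position at time t (seconds),
    with a brief lateral offset ('shove') added during the window
    [onset, onset + duration). A half-sine pulse ramps the offset
    in and out smoothly rather than jumping the camera sideways.
    All parameter values here are illustrative.
    """
    if onset <= t < onset + duration:
        phase = (t - onset) / duration          # 0 -> 1 across the pulse
        return base_x + amplitude * math.sin(math.pi * phase)
    return base_x
```

During the pulse the viewpoint translates sideways with no corresponding vestibular signal, which is what induces the compensatory sway measured on the balance board.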

Zack Wisti has studied natural signals to the vestibular system. Inertial measurement units were used to measure body and head rotation during a variety of activities, including walking, running, walking up and down stairs, and sitting at a desk viewing either a display monitor or an HMD. Head and body movements differ substantially across these conditions. The way that the neck filters body motion differs from one condition to the next, but in all cases it is well modeled by lowpass filtering. The vestibular signals measured while one is seated using an HMD do not resemble those measured when someone actually performs the motions presented virtually (e.g., walking, running), confirming the importance of visuo-vestibular mismatch in generating cybersickness.
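The neck-as-lowpass-filter idea can be sketched with a simple first-order IIR filter applied to an IMU trace. This is a minimal illustration of lowpass filtering in general, not the specific filter model fitted in the study; the smoothing constant is an arbitrary choice:

```python
import numpy as np

def lowpass(signal, alpha):
    """First-order lowpass (exponential smoothing):
        y[n] = alpha * x[n] + (1 - alpha) * y[n - 1]
    Smaller alpha attenuates fast oscillations more strongly,
    so rapid body motion is smoothed before it reaches the head.
    """
    out = np.empty(len(signal), dtype=float)
    out[0] = signal[0]
    for n in range(1, len(signal)):
        out[n] = alpha * signal[n] + (1 - alpha) * out[n - 1]
    return out

# A rapidly alternating (high-frequency) body-rotation trace is
# strongly attenuated, while a constant trace passes through unchanged.
body = np.array([1.0, -1.0] * 50)
head_like = lowpass(body, 0.2)
```

The qualitative point is that the head sees a smoothed version of trunk motion, with condition-dependent filter parameters.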


Past work in the Cognitive NeuroSystems Lab, at that time the VR Lab, includes studies of search, navigation, and object recognition in virtual environments with four spatial dimensions. The work on 4D computer graphics, search, and navigation was performed by Greg Seyranian (pictured at left, wearing a head-mounted display), whose doctoral dissertation details the behavioral experiments on search and navigation in 4D virtual environments; Philippe Colantoni, whose work as a postdoctoral fellow included the development of a 4D level editor; Barb Krug, whose digital art made the virtual environments come to life; and Mike D'Zmura.

The experiments showed that observers readily learn to use the action-game-like interface and that they can easily code landmarks and perform path-based navigation and search throughout 4D environments. A second set of experiments tested whether users can take 4D rotations into account in a manner consistent with a higher form of navigation ability, such as path integration or navigation based on a mental map. Some, but not all, participants learned how to take the 4D rotations into account.

Ma Ge worked as a graduate student on the related question of whether 4D virtual environments like these provide enough information for an observer to infer the correct shape of 4D objects. The answer is yes: the work on 4D structure from motion shows that observers are given enough information to recover the shape of a rigid 4D object viewed in either orthographic or perspective projection. Please click HERE to learn more about this work.
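A rotation in four dimensions takes place in a plane rather than about an axis, and 4D space has six coordinate planes (xy, xz, xw, yz, yw, zw) in which to rotate. The following numpy sketch, with conventions of our own choosing rather than those of the lab's software, builds such a plane rotation and applies a simple perspective projection from 4D down to 3D:

```python
import numpy as np

def plane_rotation_4d(i, j, theta):
    """4x4 rotation by angle theta in the coordinate plane spanned
    by axes i and j (0 = x, 1 = y, 2 = z, 3 = w). All other
    coordinates are left fixed.
    """
    R = np.eye(4)
    c, s = np.cos(theta), np.sin(theta)
    R[i, i] = c
    R[j, j] = c
    R[i, j] = -s
    R[j, i] = s
    return R

def project_to_3d(p4, viewer_w=3.0):
    """Perspective projection from 4D to 3D: divide x, y, z by the
    point's distance from a viewer placed on the w axis.
    (viewer_w = 3.0 is an arbitrary illustrative choice.)
    """
    x, y, z, w = p4
    return np.array([x, y, z]) / (viewer_w - w)

# A quarter-turn in the x-w plane carries the x-axis onto the w-axis:
# a direction visible in ordinary 3D space rotates into the fourth dimension.
R = plane_rotation_4d(0, 3, np.pi / 2)
moved = R @ np.array([1.0, 0.0, 0.0, 0.0])
```

Rotations in the xw, yw, and zw planes are the distinctly 4D ones that participants had to learn to take into account; the xy, xz, and yz rotations behave like familiar 3D rotations.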