Preprint manuscript of:

  • Humans ignore motion and stereo cues in favor of a fictional stable world
    Glennerster, A., Tcheang, L., Gilson, S.J., Fitzgibbon, A.W. and Parker, A.J. (2006)
    Current Biology, 16(4), 428–432.
    Preprint PDF: gtgfp.pdf
    
    Abstract
    As we move through the world, our eyes acquire a sequence of images. The information from this sequence is sufficient to determine the structure of a three-dimensional scene, up to a scale factor determined by the distance that the eyes have moved (Faugeras, 1993; Hartley and Zisserman, 2000). Previous evidence shows that the human visual system accounts for the distance the observer has walked (Gogel, 1990; Bradshaw, Parton and Glennerster, 2000) and the separation of the eyes (Helmholtz, 1866; Judge and Bradford, 1988; Johnston, 1991; Brenner and van Damme, 1999) when judging the scale, shape, and distance of objects. However, in an immersive virtual-reality environment, observers failed to notice when a scene expanded or contracted, despite having consistent information about scale from both distance walked and binocular vision. This failure led to large errors in judging the size of objects. The pattern of errors cannot be explained by assuming a visual reconstruction of the scene with an incorrect estimate of interocular separation or distance walked. Instead, it is consistent with a Bayesian model of cue integration in which the efficacy of motion and disparity cues is greater at near viewing distances. Our results imply that observers are more willing to adjust their estimate of interocular separation or distance walked than to accept that the scene has changed in size.
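
    As a rough sketch of the kind of Bayesian cue-integration account described
    above, the Python snippet below combines a scale estimate from motion and
    disparity cues with a prior that the world is stable, weighting each by its
    reliability (inverse variance). The noise model, parameter values, and
    variable names here are illustrative assumptions, not taken from the paper;
    the only idea carried over is that cue reliability falls with viewing
    distance, so the stable-world prior dominates when objects are far away.

        import math

        def combine_cues(cue_est, cue_sigma, prior_est, prior_sigma):
            # Reliability-weighted (Gaussian/Bayesian) combination:
            # each source is weighted by its inverse variance.
            w_cue, w_prior = 1.0 / cue_sigma**2, 1.0 / prior_sigma**2
            est = (w_cue * cue_est + w_prior * prior_est) / (w_cue + w_prior)
            return est, math.sqrt(1.0 / (w_cue + w_prior))

        def cue_sigma_at(distance_m, base=0.05, growth=0.1):
            # Assumption: motion/disparity noise grows with viewing distance,
            # i.e. these cues have greater efficacy near the observer.
            return base + growth * distance_m**2

        true_scale = 2.0    # the virtual scene has actually doubled in size
        prior_scale = 1.0   # stable-world prior: no change in scale
        prior_sigma = 0.2   # hypothetical prior uncertainty

        for d in (0.5, 1.5, 3.0):  # viewing distances in metres (hypothetical)
            s, _ = combine_cues(true_scale, cue_sigma_at(d),
                                prior_scale, prior_sigma)
            print(f"distance {d:.1f} m -> perceived scale {s:.2f}")

    At near distances the combined estimate tracks the cues (scale close to 2);
    at far distances it collapses toward the stable-world prior of 1, mirroring
    the reported failure to notice the expansion.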

    ScienceDirect link to article (if you have a subscription).

    Some movies illustrating the experiment

    I can email a pdf of the article on request.
