Vuong, Fitzgibbon and Glennerster (2019)

This paper is open access:

Vuong, J., Fitzgibbon, A.W. and Glennerster, A. (2019)
No single, stable 3D representation can explain pointing biases in a spatial updating task. Scientific Reports, 9 (1). 12578.

Abstract
People are able to keep track of objects as they navigate through space, even when the objects are out of sight. This requires some kind of representation of the scene and of the observer's location, but the form this might take is debated. We tested the accuracy and reliability of observers' estimates of the visual direction of previously viewed targets. Participants viewed four objects from one location, with binocular vision and small head movements; then, without any further sight of the targets, they walked to another location and pointed towards them. All conditions were tested in an immersive virtual environment, and some were also carried out in a real scene. Participants made large, consistent pointing errors that are poorly explained by any stable 3D representation. Any explanation based on a 3D representation would have to posit a different layout of the remembered scene depending on the orientation of the obscuring wall at the moment the participant points. Our data show that the mechanisms for updating the visual direction of unseen targets are not based on a stable 3D model of the scene, even a distorted one.
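To illustrate the prediction that a single, stable 3D model of the scene would make (this sketch is not from the paper; the function names, coordinates and the measured direction below are hypothetical), the expected pointing direction from the new viewpoint is simply the bearing of the remembered target position relative to the participant's updated position, and the pointing bias is the angle between that prediction and the direction actually pointed. A minimal sketch in Python, assuming metric scene coordinates:

import numpy as np

def predicted_pointing_direction(target_xyz, viewpoint_xyz):
    # Unit vector from the observer's current position to the remembered
    # target, as a single stable 3D scene model would predict.
    v = np.asarray(target_xyz, dtype=float) - np.asarray(viewpoint_xyz, dtype=float)
    return v / np.linalg.norm(v)

def angular_error_deg(pointed_dir, predicted_dir):
    # Angle in degrees between the measured pointing direction and the
    # direction predicted from the stable-3D-representation hypothesis.
    a = np.asarray(pointed_dir, dtype=float)
    b = np.asarray(predicted_dir, dtype=float)
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# Hypothetical example: target remembered at (2, 0, 5) m, participant has
# walked to (1, 0, 1) m and points roughly towards (0.3, 0, 0.95).
pred = predicted_pointing_direction([2.0, 0.0, 5.0], [1.0, 0.0, 1.0])
print(angular_error_deg([0.3, 0.0, 0.95], pred))

A stable representation (even a distorted one) should yield errors that depend only on the remembered layout and the pointing location, not on factors such as the orientation of the obscuring wall at the moment of pointing.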

Raw data
Interactive site (displaying participant pointing directions).



