The following is a preprint of:

View-based modelling of human visual navigation errors (2011) Pickup, L.C., Fitzgibbon, A.W., Gilson, S.J., and Glennerster, A. IEEE IVMSP 2011, Ithaca, New York, 135-140

Abstract
View-based and Cartesian representations provide rival accounts of visual navigation in humans, and here we explore possible models for the view-based case. A visual 'homing' experiment was undertaken by human participants in immersive virtual reality. The distributions of end-point errors on the ground plane differed significantly in shape and extent depending on visual landmark configuration and relative goal location. A model based on simple visual cues captures important characteristics of these distributions. Augmenting the visual features to include 3D elements such as stereo and motion parallax results in a set of models that describe the data accurately, demonstrating the effectiveness of a view-based approach.

A ScienceDirect copy of the article is available if you have a subscription; otherwise, email me (a.glennerster "at" reading.ac.uk), although the preprint is essentially the same.