Visual attention saccadic models: taking into account global scene context and temporal aspects of gaze behaviour
Abstract
Over the last 20 years, nearly 100 different saliency models have been proposed across a broad range of scientific communities. However, they remain quite limited when applied to natural scene exploration. Indeed, the vast majority of these models ignore two critical components of visual perception: (1) the global context of the visual scene, and (2) the sequential, time-varying nature of overt attention. Here, we demonstrate that saccadic models make it possible to overcome these limitations. Saccadic models aim to predict the visual scanpath itself, i.e. the series of fixations and saccades an observer would perform to sample the visual environment. We trained our saccadic model with eye-tracking data from six datasets featuring different categories of visual content (static natural scenes, static webpages, dynamic landscapes, and conversational videos). We show that, for a given visual category, our saccadic model tuned with the corresponding eye-tracking data outperforms well-established saliency models that are blind to the semantic category of the visual scene. Moreover, our model produces scanpaths in close agreement with human behavior. This approach opens new avenues for tailoring visual attention models to specific classes of visual stimuli or observer profiles.
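To illustrate the general principle behind saccadic models, the following minimal Python sketch generates a scanpath from a saliency map by combining bottom-up saliency with two simple oculomotor biases: a saccade-amplitude prior favouring short saccades and inhibition of return. This is an illustrative toy, not the model described in the paper; the function name `generate_scanpath` and all parameter values are hypothetical assumptions.

```python
import numpy as np

def generate_scanpath(saliency, n_fixations=10, amp_scale=15.0,
                      ior_sigma=8.0, rng=None):
    """Sample a plausible scanpath from a saliency map.

    At each step, the next fixation is drawn with probability proportional
    to saliency x a saccade-amplitude prior (favouring short saccades)
    x an inhibition-of-return mask suppressing revisited locations.
    All parameter values are illustrative, not fitted to eye-tracking data.
    """
    rng = np.random.default_rng(rng)
    h, w = saliency.shape
    ys, xs = np.mgrid[0:h, 0:w]
    fix = (h // 2, w // 2)          # start at the centre (center bias)
    ior = np.ones_like(saliency)    # inhibition-of-return weights
    scanpath = [fix]
    for _ in range(n_fixations - 1):
        d = np.hypot(ys - fix[0], xs - fix[1])           # saccade amplitude (px)
        amp_prior = np.exp(-d / amp_scale)               # bias toward short saccades
        ior *= 1.0 - np.exp(-d**2 / (2 * ior_sigma**2))  # suppress current fixation
        p = saliency * amp_prior * ior
        p = p.ravel() / p.sum()
        idx = rng.choice(p.size, p=p)
        fix = (idx // w, idx % w)
        scanpath.append(fix)
    return scanpath

# Toy usage: a random map stands in for a real saliency model's output.
sal = np.random.default_rng(0).random((60, 80))
print(generate_scanpath(sal, n_fixations=5, rng=1))
```

In a category-tuned setting such as the one the paper describes, the amplitude prior and inhibition-of-return parameters would instead be estimated from eye-tracking data for each class of visual content.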
Domains
Cognitive science