High-level cinematic knowledge to predict inter-observer visual congruency
Abstract
When watching the same visual stimulus, humans can exhibit a wide range of gaze behaviors. These variations can be caused by bottom-up factors (i.e., features of the stimulus itself) or top-down factors (i.e., characteristics of the observers). Inter-observer visual congruency (IOC) is a measure of this range. Moreover, it has been shown that cinematic techniques, such as camera motion or shot editing, have a significant impact on this measure [17]. In this work, we first propose a metric for measuring IOC in videos that takes into account the dynamic nature of the stimuli. We then propose a model for predicting IOC in the context of feature films, using high-level cinematic annotations as prior information in a deep learning framework.
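The abstract does not specify how the IOC metric is computed. As a rough illustration only, the sketch below shows one common way to quantify inter-observer congruency from raw gaze data: a leave-one-out comparison over sliding temporal windows. The function name `ioc_sliding_window`, the Gaussian similarity kernel, and all parameters are assumptions for illustration and are not taken from the paper.

```python
import numpy as np

def ioc_sliding_window(gaze, window=30, sigma=40.0):
    """Leave-one-out inter-observer congruency over sliding temporal windows.

    Illustrative sketch only; not the metric proposed in the paper.

    gaze   : array of shape (n_observers, n_frames, 2) with (x, y) gaze points.
    window : number of frames per temporal window.
    sigma  : spatial tolerance in pixels for the Gaussian similarity kernel.

    Returns one score per window, in [0, 1]; higher values mean the
    observers' gaze points are more tightly clustered.
    """
    n_obs, n_frames, _ = gaze.shape
    scores = []
    for start in range(0, n_frames - window + 1, window):
        chunk = gaze[:, start:start + window, :]           # (n_obs, window, 2)
        per_obs = []
        for i in range(n_obs):
            others = np.delete(chunk, i, axis=0)           # remaining observers
            # distance from observer i to every other observer, frame by frame
            d = np.linalg.norm(chunk[i][None, :, :] - others, axis=-1)
            # Gaussian similarity: 1 when gaze coincides, near 0 when far apart
            per_obs.append(np.exp(-(d ** 2) / (2 * sigma ** 2)).mean())
        scores.append(float(np.mean(per_obs)))
    return np.array(scores)
```

Averaging these windowed scores over a shot or a scene would give a single congruency value per segment, which is the kind of quantity a prediction model could be trained to regress.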
Domains
Artificial Intelligence [cs.AI]