Visual Features in the Perception of Liquids
Abstract
Perceptual constancy—identifying surfaces and objects across large image changes—remains an important challenge for visual neuroscience. Liquids are particularly challenging because they respond to external forces in complex, highly variable ways, presenting an enormous range of images to the visual system. To achieve constancy, the brain must perform a causal inference that disentangles the liquid’s viscosity from external factors—like gravity and object interactions—that also affect the liquid’s behavior. Here, we tested whether the visual system estimates viscosity using “midlevel” features that respond more to viscosity than to other factors. Observers reported the perceived viscosity of simulated liquids ranging from water to molten glass, exhibiting diverse behaviors (e.g., pouring, stirring). A separate group of observers rated the same animations for 20 midlevel 3D shape and motion features. Applying factor analysis to the feature ratings reveals that a weighted combination of four underlying factors (distribution, irregularity, rectilinearity, and dynamics) predicts perceived viscosity very well across this wide range of contexts (R² = 0.93). Interestingly, observers unknowingly ordered their midlevel judgments according to the one factor common across contexts: variation in viscosity. Principal component analysis reveals that, across the features, the first component lines up almost perfectly with viscosity (R² = 0.96). Our findings demonstrate that the visual system achieves constancy by representing stimuli in a multidimensional feature space—based on complementary midlevel features—which successfully clusters very different stimuli together and teases similar stimuli apart, so that viscosity can be read out easily.
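To make the two analyses described above concrete, the following is a minimal sketch (not the authors’ analysis code) of how latent factors extracted from midlevel feature ratings could be related to perceived viscosity. The array names (`feature_ratings`, `viscosity_ratings`), their shapes, and the use of scikit-learn are illustrative assumptions; here the data are random placeholders rather than the study’s ratings.

```python
# Minimal sketch of the factor-analysis and PCA readouts described in the abstract.
# All data and dimensions below are hypothetical placeholders.
import numpy as np
from sklearn.decomposition import FactorAnalysis, PCA
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n_stimuli, n_features = 80, 20                       # hypothetical stimulus count; 20 midlevel features
feature_ratings = rng.normal(size=(n_stimuli, n_features))   # mean feature ratings per animation
viscosity_ratings = rng.normal(size=n_stimuli)               # mean perceived viscosity per animation

# (1) Factor analysis: reduce the 20 feature ratings to four latent factors,
#     then predict perceived viscosity from a weighted combination of them.
factors = FactorAnalysis(n_components=4).fit_transform(feature_ratings)
reg = LinearRegression().fit(factors, viscosity_ratings)
print("R^2, four factors -> viscosity:",
      r2_score(viscosity_ratings, reg.predict(factors)))

# (2) PCA: test how strongly the first principal component of the feature
#     ratings aligns with perceived viscosity.
pc1 = PCA(n_components=1).fit_transform(feature_ratings).ravel()
r = np.corrcoef(pc1, viscosity_ratings)[0, 1]
print("R^2, first component vs. viscosity:", r**2)
```

With real ratings in place of the random placeholders, the two printed R² values would correspond to the kinds of summary statistics reported in the abstract (0.93 for the four-factor regression, 0.96 for the first principal component).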