TY - JOUR
T1 - Individual differences in internal models explain idiosyncrasies in scene perception
AU - Wang, Gongting
AU - Foxwell, Matthew
AU - Cichy, Radoslaw
AU - Pitcher, David James
AU - Kaiser, Daniel Sebastian
N1 - © 2024 The Author(s)
PY - 2024/4/1
Y1 - 2024/4/1
AB - According to predictive processing theories, vision is facilitated by predictions derived from our internal models of what the world should look like. However, the contents of these models and how they vary across people remain unclear. Here, we use drawing as a behavioral readout of the contents of the internal models of individual participants. Participants were first asked to draw typical versions of scene categories, as descriptors of their internal models. These drawings were converted into standardized 3D renders, which we used as stimuli in subsequent scene categorization experiments. Across two experiments, participants' scene categorization was more accurate for renders tailored to their own drawings than for renders based on others' drawings or on copies of scene photographs, suggesting that scene perception is determined by a match with idiosyncratic internal models. Using a deep neural network to computationally evaluate similarities between scene renders, we further demonstrate that graded similarity to the render based on participants' own typical drawings (and thus to their internal model) predicts categorization performance across a range of candidate scenes. Together, our results showcase the potential of a new method for understanding individual differences: starting from participants' personal expectations about the structure of real-world scenes.
U2 - 10.1016/j.cognition.2024.105723
DO - 10.1016/j.cognition.2024.105723
M3 - Article
SN - 0010-0277
VL - 245
JO - Cognition
JF - Cognition
M1 - 105723
ER -