View-symmetric representations of faces in human and artificial neural networks

Xun Zhu, David M Watson, Daniel Rogers, Timothy J Andrews

Research output: Contribution to journal › Article › peer-review

Abstract

View symmetry has been suggested to be an important intermediate representation between view-specific and view-invariant representations of faces in the human brain. Here, we compared view-symmetry in humans and a deep convolutional neural network (DCNN) trained to recognise faces. First, we compared the output of the DCNN to head rotations in yaw (left-right), pitch (up-down) and roll (in-plane rotation). For yaw, an initial view-specific representation was evident in the convolutional layers, but a view-symmetric representation emerged in the fully-connected layers. Consistent with a role in the recognition of faces, we found that view-symmetric responses to yaw were greater for same-identity than for different-identity faces. In contrast, we did not find a similar transition from view-specific to view-symmetric representations in the DCNN for either pitch or roll. These findings suggest that view-symmetry emerges when opposite rotations of the head lead to mirror images. Next, we compared the view-symmetric patterns of response to yaw in the DCNN with corresponding behavioural and neural responses in humans. We found that responses in the fully-connected layers of the DCNN correlated with judgements of perceptual similarity and with the responses of higher visual regions. These findings suggest that view-symmetric representations may be a computationally efficient way to represent faces in humans and artificial neural networks for the recognition of identity.
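To make the core comparison concrete, the sketch below shows one way a view-symmetry measure could be computed from DCNN layer activations: correlating the response to a face rotated +θ in yaw with the response to the same (or a different) face rotated −θ, its mirror-opposite view. This is a minimal illustration with hypothetical function and variable names and random stand-in activations; it is not the paper's actual analysis pipeline.

```python
import numpy as np

def view_symmetry_index(act_pos, act_neg):
    """Pearson correlation between layer activations for a face at +theta
    and -theta yaw (mirror-opposite views). Values near 1 indicate a
    view-symmetric representation in that layer."""
    return np.corrcoef(act_pos.ravel(), act_neg.ravel())[0, 1]

# Random stand-ins for activations from one DCNN layer (hypothetical data;
# in the study these would be extracted from convolutional or
# fully-connected layers of a face-trained network).
rng = np.random.default_rng(0)
same_id_pos = rng.normal(size=512)                              # identity A, +45 deg yaw
same_id_neg = same_id_pos + rng.normal(scale=0.3, size=512)     # identity A, -45 deg yaw
diff_id_neg = rng.normal(size=512)                              # identity B, -45 deg yaw

print("same identity:", view_symmetry_index(same_id_pos, same_id_neg))
print("diff identity:", view_symmetry_index(same_id_pos, diff_id_neg))
```

Under the abstract's account, a layer with a view-symmetric identity code should yield a higher index for same-identity pairs of mirror-opposite views than for different-identity pairs, as in this toy example.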

Original language: English
Article number: 109061
Number of pages: 11
Journal: Neuropsychologia
Volume: 207
Early online date: 7 Dec 2024
DOIs
Publication status: Published - Jan 2025

Bibliographical note

© 2024 The Authors
