Adversarial 3D Face Disentanglement of Identity and Expression

Yajie Gu, N. E. Pears, Hao Sun

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

We propose a new framework to decompose 3D facial shape into identity and expression.
Existing 3D face disentanglement methods assume the presence of a corresponding neutral (i.e. identity) face for each subject. Our method introduces an identity discriminator to remove this requirement. The discriminator is a binary classifier that determines whether two input faces belong to the same identity; it encourages the synthesised identity face to share the identity features of the input face and to approach the 'apathy' expression.
To this end, we use adversarial learning to train a PointNet-based variational auto-encoder together with the discriminator. Comprehensive experiments are conducted on the CoMA, BU3DFE, and FaceScape datasets. Results demonstrate state-of-the-art performance, with the option of operating in a more versatile application setting where no neutral ground truths are known. Code is available at https://github.com/rmraaron/FaceExpDisentanglement.
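To make the adversarial setup concrete, the sketch below (PyTorch, not the authors' released code) shows one way such a pipeline could be wired together: a PointNet-style variational encoder that splits the latent code into identity and expression parts, a decoder, and an identity discriminator that judges whether the input face and the synthesised identity-only face share the same identity. All module names, latent sizes, vertex counts, and loss weights are illustrative assumptions rather than details taken from the paper.

```python
# Minimal sketch of adversarial identity/expression disentanglement for 3D faces.
# This is an illustrative assumption of the training setup, not the authors' code.
import torch
import torch.nn as nn

class PointNetEncoder(nn.Module):
    """Shared-MLP PointNet-style encoder: per-point features + max pooling,
    with separate VAE heads for identity and expression latents."""
    def __init__(self, z_id=64, z_exp=32):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Conv1d(3, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.ReLU(),
            nn.Conv1d(128, 256, 1), nn.ReLU(),
        )
        self.id_head = nn.Linear(256, 2 * z_id)    # mean and log-variance
        self.exp_head = nn.Linear(256, 2 * z_exp)

    def forward(self, x):                # x: (B, N, 3) face point cloud
        f = self.mlp(x.transpose(1, 2))  # (B, 256, N) per-point features
        g = f.max(dim=2).values          # (B, 256) global feature
        mu_id, logvar_id = self.id_head(g).chunk(2, dim=1)
        mu_exp, logvar_exp = self.exp_head(g).chunk(2, dim=1)
        return (mu_id, logvar_id), (mu_exp, logvar_exp)

class Decoder(nn.Module):
    """Maps concatenated identity and expression latents back to N vertices."""
    def __init__(self, z_id=64, z_exp=32, n_points=5023):
        super().__init__()
        self.n_points = n_points
        self.net = nn.Sequential(
            nn.Linear(z_id + z_exp, 512), nn.ReLU(),
            nn.Linear(512, 1024), nn.ReLU(),
            nn.Linear(1024, n_points * 3),
        )

    def forward(self, z_id, z_exp):
        out = self.net(torch.cat([z_id, z_exp], dim=1))
        return out.view(-1, self.n_points, 3)

class IdentityDiscriminator(nn.Module):
    """Binary classifier: do two faces belong to the same identity?"""
    def __init__(self, n_points=5023):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * n_points * 3, 512), nn.ReLU(),
            nn.Linear(512, 128), nn.ReLU(),
            nn.Linear(128, 1),
        )

    def forward(self, face_a, face_b):
        pair = torch.cat([face_a.flatten(1), face_b.flatten(1)], dim=1)
        return self.net(pair)            # logit: same identity or not

def reparameterise(mu, logvar):
    return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)

# One illustrative generator step: reconstruct the input face and synthesise a
# neutral ("apathy") face from the identity latent alone; the discriminator
# should judge the synthesised face as sharing the input face's identity,
# so no ground-truth neutral scan is required.
if __name__ == "__main__":
    B, N = 4, 5023
    enc = PointNetEncoder()
    dec = Decoder(n_points=N)
    disc = IdentityDiscriminator(n_points=N)
    faces = torch.randn(B, N, 3)

    (mu_id, lv_id), (mu_exp, lv_exp) = enc(faces)
    z_id, z_exp = reparameterise(mu_id, lv_id), reparameterise(mu_exp, lv_exp)

    recon = dec(z_id, z_exp)                       # full reconstruction
    neutral = dec(z_id, torch.zeros_like(z_exp))   # identity-only face

    rec_loss = (recon - faces).pow(2).mean()
    kl = -0.5 * (1 + lv_id - mu_id.pow(2) - lv_id.exp()).mean() \
         -0.5 * (1 + lv_exp - mu_exp.pow(2) - lv_exp.exp()).mean()
    # Adversarial term: the (input, neutral) pair should be classified as
    # a same-identity pair. Loss weights here are placeholder values.
    adv = nn.functional.binary_cross_entropy_with_logits(
        disc(faces, neutral), torch.ones(B, 1))
    loss = rec_loss + 1e-3 * kl + 1e-2 * adv
    loss.backward()
    print(float(loss))
```

In a full adversarial training loop the discriminator would also be updated on genuine same-identity and different-identity pairs, alternating with the encoder/decoder updates; the snippet above only shows the generator-side objective.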
Original language: English
Title of host publication: International Conference on Automatic Face and Gesture Recognition 2023
Publisher: IEEE
Publication status: Accepted/In press - 11 Sept 2022
Event: International Conference on Automatic Face and Gesture Recognition 2023 - Waikoloa Beach Marriott Resort, Waikoloa, Hawaii, United States
Duration: 5 Jan 2023 – 8 Jan 2023
https://fg2023.ieee-biometrics.org/

Conference

Conference: International Conference on Automatic Face and Gesture Recognition 2023
Abbreviated title: FG-2023
Country/Territory: United States
City: Waikoloa, Hawaii
Period: 5/01/23 – 8/01/23
Internet address: https://fg2023.ieee-biometrics.org/

Bibliographical note

This is an author-produced version of the published paper, uploaded in accordance with the publisher’s self-archiving policy. Further copying may not be permitted; contact the publisher for details.
