Information Bottlenecked Variational Autoencoder for Disentangled 3D Facial Expression Modelling

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Author(s)

Department/unit(s)

Publication details

Title of host publication: Winter Conference on Applications in Computer Vision, Proceedings
Date: Accepted/In press - 4 Oct 2021
Publisher: IEEE
Original language: English

Abstract

Learning a disentangled representation is essential to build 3D face models that accurately capture identity and expression. We propose a novel variational autoencoder (VAE) framework to disentangle identity and expression from 3D input faces that exhibit a wide variety of expressions. Specifically, we design a system with two decoders: one for neutral-expression faces (i.e. identity-only faces) and one for the original (expressive) input faces. Crucially, we apply an additional mutual-information regulariser to the identity part to address the imbalance of information between the expressive input faces and the reconstructed neutral faces. Our evaluations on two public datasets (CoMA and BU-3DFE) show that this model achieves competitive results on the 3D face reconstruction task and state-of-the-art results on identity-expression disentanglement. We also show that, by extending the model to a conditional VAE, we obtain a system that generates different levels of expression from semantically meaningful variables.
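The architecture described in the abstract can be summarised as a single encoder producing separate identity and expression latent codes, with one decoder reconstructing the neutral (identity-only) face from the identity code alone and a second decoder reconstructing the expressive input from both codes. The following is a minimal PyTorch sketch of that structure; all layer sizes, latent dimensions, the vertex count, and the loss weights are assumptions for illustration, and the paper's actual network and its mutual-information regulariser are not reproduced here.

import torch
import torch.nn as nn

class DisentangledFaceVAE(nn.Module):
    # Hypothetical dense version of the two-decoder VAE; the published model
    # may use a different encoder/decoder design and dimensionalities.
    def __init__(self, n_verts=5023, id_dim=32, exp_dim=32, hidden=256):
        super().__init__()
        in_dim = n_verts * 3  # flattened (x, y, z) coordinates per vertex
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        # Separate heads for the identity and expression latent distributions.
        self.id_mu, self.id_logvar = nn.Linear(hidden, id_dim), nn.Linear(hidden, id_dim)
        self.exp_mu, self.exp_logvar = nn.Linear(hidden, exp_dim), nn.Linear(hidden, exp_dim)
        # Decoder 1: reconstructs the neutral face from the identity code only.
        self.neutral_decoder = nn.Sequential(
            nn.Linear(id_dim, hidden), nn.ReLU(), nn.Linear(hidden, in_dim))
        # Decoder 2: reconstructs the expressive input from both codes.
        self.full_decoder = nn.Sequential(
            nn.Linear(id_dim + exp_dim, hidden), nn.ReLU(), nn.Linear(hidden, in_dim))

    @staticmethod
    def reparameterise(mu, logvar):
        return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)

    def forward(self, x):
        h = self.encoder(x)
        id_mu, id_logvar = self.id_mu(h), self.id_logvar(h)
        exp_mu, exp_logvar = self.exp_mu(h), self.exp_logvar(h)
        z_id = self.reparameterise(id_mu, id_logvar)
        z_exp = self.reparameterise(exp_mu, exp_logvar)
        neutral = self.neutral_decoder(z_id)
        expressive = self.full_decoder(torch.cat([z_id, z_exp], dim=-1))
        return neutral, expressive, (id_mu, id_logvar), (exp_mu, exp_logvar)

def vae_losses(neutral_target, expr_target, outputs, beta=1e-3):
    # Two reconstruction terms (one per decoder) plus standard KL terms.
    # The mutual-information regulariser on the identity latent described in
    # the abstract would be added on top of this; it is omitted here.
    neutral, expressive, (id_mu, id_logvar), (exp_mu, exp_logvar) = outputs
    rec = (nn.functional.mse_loss(neutral, neutral_target)
           + nn.functional.mse_loss(expressive, expr_target))
    kl = lambda mu, lv: -0.5 * torch.mean(1 + lv - mu.pow(2) - lv.exp())
    return rec + beta * (kl(id_mu, id_logvar) + kl(exp_mu, exp_logvar))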

Bibliographical note

This is an author-produced version of the published paper, uploaded in accordance with the publisher's self-archiving policy. Further copying may not be permitted; contact the publisher for details.

Research areas

  • 3D face modelling, 3D facial expression modelling, 3D face disentanglement
