Parts-based Implicit 3D Face Modeling

Yajie Gu, N. E. Pears

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Previous 3D face analysis has focussed on disentangling 3D facial identity, expression and pose. However, independent control of different facial parts, and the ability to learn explainable parts-based latent shape embeddings for implicit surfaces, remain open problems. We propose a method for 3D face modeling that learns a continuous parts-based deformation field mapping the semantic parts of a subject's face to a template. By swapping affine-mapped facial features from predefined regions among different individuals, we achieve significant parts-based training-data augmentation. Moreover, by sequentially morphing the surface points of these parts, we learn corresponding latent representations, shape deformation fields, and the signed distance function of a template shape. This improves shape controllability and the interpretability of the facial latent space, while retaining all of the known advantages of implicit surface modelling. Unlike previous works that generate new faces from full-identity latent representations, our approach enables independent control of different facial parts (nose, mouth, eyes, and the remaining surface) while still generating new faces with high reconstruction quality.
Evaluations on the FaceScape and Headspace datasets demonstrate both facial expression and parts disentanglement, independent control of those facial parts, and state-of-the-art facial parts reconstruction.
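The pipeline the abstract describes can be sketched in miniature: one deformation field per semantic part warps query points, conditioned on that part's latent code, into a shared template space where a single signed distance function is evaluated; swapping one part's latent between subjects changes only that part. This is a minimal illustrative sketch, not the paper's architecture: the part list, the random two-layer MLPs, the latent dimension, and the unit-sphere template SDF are all stand-in assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
PARTS = ["nose", "mouth", "eyes", "rest"]   # illustrative part list
LATENT_DIM = 8                              # illustrative latent size

def make_mlp(in_dim, hidden, out_dim):
    """Random two-layer MLP weights (stand-in for a trained network)."""
    w1 = rng.normal(scale=0.1, size=(in_dim, hidden))
    w2 = rng.normal(scale=0.1, size=(hidden, out_dim))
    return w1, w2

def mlp_forward(params, x):
    w1, w2 = params
    return np.tanh(x @ w1) @ w2

# One continuous deformation field per part:
# (3D point, part latent) -> 3D displacement toward template space.
deform_nets = {p: make_mlp(3 + LATENT_DIM, 32, 3) for p in PARTS}

def template_sdf(p):
    """Illustrative template shape: signed distance to the unit sphere."""
    return np.linalg.norm(p, axis=-1) - 1.0

def parts_sdf(points, part_labels, latents):
    """Subject SDF: warp each point with its part's field conditioned on
    that part's latent, then query the shared template SDF."""
    warped = np.empty_like(points)
    for part in PARTS:
        mask = part_labels == part
        if not mask.any():
            continue
        z = np.broadcast_to(latents[part], (mask.sum(), LATENT_DIM))
        x = np.concatenate([points[mask], z], axis=-1)
        warped[mask] = points[mask] + mlp_forward(deform_nets[part], x)
    return template_sdf(warped)

# Independent part control: swap only the "nose" latent between two subjects.
latents_a = {p: rng.normal(size=LATENT_DIM) for p in PARTS}
latents_b = {p: rng.normal(size=LATENT_DIM) for p in PARTS}
hybrid = dict(latents_a, nose=latents_b["nose"])

pts = rng.normal(size=(4, 3))
labels = np.array(["nose", "mouth", "eyes", "rest"])
sdf_a = parts_sdf(pts, labels, latents_a)
sdf_hybrid = parts_sdf(pts, labels, hybrid)

# Only the point assigned to the swapped part changes its signed distance.
changed = ~np.isclose(sdf_a, sdf_hybrid)
print(changed)
```

The per-part masking is what gives the disentanglement the abstract claims: points labelled "mouth", "eyes", or "rest" never see the nose latent, so editing it leaves them untouched.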
Original language: English
Title of host publication: Proceedings of the 19th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications
Subtitle of host publication: VISAPP 2024
Editors: Y. Gu, N. Pears
Publisher: SciTePress
Pages: 201-212
Number of pages: 12
Volume: 2
ISBN (Print): 978-989-758-679-8
DOIs
Publication status: Published - 27 Feb 2024

Publication series

Name: VISIGRAPP
ISSN (Electronic): 2184-4321

Bibliographical note

This is an author-produced version of the published paper. Uploaded in accordance with the University’s Research Publications and Open Access policy.
