Text2Face: 3D Morphable Faces from Text

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

We present the first 3D morphable modelling approach whereby 3D face shape can be directly and completely defined using a textual prompt. Building on work in multi-modal learning, we extend the FLAME head model to a common image-and-text latent space. This allows for direct 3D Morphable Model (3DMM) parameter generation, and therefore shape manipulation, from textual descriptions. Our method, Text2Face, has many applications; for example, generating police photofits where the input is already in natural language. It further enables multi-modal 3DMM image fitting to sketches and sculptures, as well as images.
Original language: English
Title of host publication: International Conference on Learning Representations 2023
Subtitle of host publication: Proceedings
Publisher: IEEE
Number of pages: 7
Publication status: Published - 1 May 2023
Event: International Conference on Learning Representations - Kigali, Rwanda
Duration: 1 May 2023 - 5 May 2023
Conference number: 11
https://iclr.cc/Conferences/2023

Conference

Conference: International Conference on Learning Representations
Abbreviated title: ICLR
Country/Territory: Rwanda
City: Kigali
Period: 1/05/23 - 5/05/23
Internet address: https://iclr.cc/Conferences/2023

Bibliographical note

This is an author-produced version of the published paper. Uploaded in accordance with the University’s Research Publications and Open Access policy.

Keywords

  • 3D morphable model
  • 3D generative face model
