Deep Mixture Generative Autoencoders

Research output: Contribution to journal › Article › peer-review

Publication details

Journal: IEEE Transactions on Neural Networks and Learning Systems
Date (Accepted/In press): 2021
Date (Published, current): 19 Apr 2021
Number of pages: 15
Original language: English

Abstract

Variational autoencoders (VAEs) are among the most popular unsupervised generative models, relying on learning latent representations of the data. In this paper, we extend the classical concept of Gaussian mixtures into the deep variational framework by proposing a mixture of VAEs (MVAE). Each component in the MVAE model is implemented by a variational encoder and has an associated sub-decoder. The separation between the latent spaces modelled by different encoders is enforced using the d-variable Hilbert-Schmidt Independence Criterion (dHSIC), so that each component captures different variational features of the data. We also propose a mechanism for finding the appropriate number of VAE components for a given task, leading to an optimal architecture. The differentiable categorical Gumbel-Softmax distribution is used to generate dropout masking parameters within the end-to-end backpropagation training framework. Extensive experiments show that the proposed MVAE model learns a rich latent data representation and is able to discover additional underlying data factors.
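To make the abstract's architecture concrete, below is a minimal sketch of a mixture of VAEs in which a Gumbel-Softmax sample gates the per-component losses. The component count, layer sizes, gating parameterisation, and loss weighting are illustrative assumptions, not the architecture reported in the paper, and the dHSIC separation penalty between the component latent spaces is omitted for brevity.

```python
# Sketch only: a mixture of VAE components with differentiable Gumbel-Softmax
# gating over components. Hyperparameters and network shapes are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixtureVAE(nn.Module):
    def __init__(self, x_dim=784, z_dim=16, n_components=4, hidden=256):
        super().__init__()
        self.K = n_components
        # One variational encoder and one associated sub-decoder per component.
        self.encoders = nn.ModuleList([
            nn.Sequential(nn.Linear(x_dim, hidden), nn.ReLU(),
                          nn.Linear(hidden, 2 * z_dim))  # outputs [mu, log_var]
            for _ in range(self.K)])
        self.decoders = nn.ModuleList([
            nn.Sequential(nn.Linear(z_dim, hidden), nn.ReLU(),
                          nn.Linear(hidden, x_dim))
            for _ in range(self.K)])
        # Learnable logits from which a relaxed component mask is sampled.
        self.mask_logits = nn.Parameter(torch.zeros(self.K))

    def forward(self, x, temperature=1.0):
        # Differentiable (relaxed one-hot) mask over components, trainable
        # end-to-end by backpropagation through the Gumbel-Softmax sample.
        mask = F.gumbel_softmax(self.mask_logits, tau=temperature, hard=False)
        loss = 0.0
        latents = []
        for k in range(self.K):
            mu, log_var = self.encoders[k](x).chunk(2, dim=-1)
            z = mu + torch.randn_like(mu) * (0.5 * log_var).exp()  # reparameterisation
            latents.append(z)
            x_hat = self.decoders[k](z)
            recon = F.mse_loss(x_hat, x, reduction='mean')
            kl = -0.5 * (1 + log_var - mu.pow(2) - log_var.exp()).mean()
            # Each component's ELBO terms are weighted by its mask value;
            # a dHSIC penalty on `latents` would be added here in the full model.
            loss = loss + mask[k] * (recon + kl)
        return loss, latents

# Usage: loss, latents = MixtureVAE()(torch.randn(32, 784)); loss.backward()
```

Annealing the temperature towards zero pushes the mask towards a discrete selection of components, which is one plausible way to prune unnecessary components when searching for the appropriate number of VAEs.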
