InfoVAEGAN: Learning Joint Interpretable Representations by Information Maximization and Maximum Likelihood

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Learning disentangled and interpretable representations is an important step towards learning comprehensive representations of data on its manifold. In this paper, we propose a novel representation learning algorithm which combines the inference abilities of Variational Autoencoders (VAE) with the generalization capability of Generative Adversarial Networks (GAN). The proposed model, called InfoVAEGAN, consists of three networks: Encoder, Generator, and Discriminator. InfoVAEGAN aims to jointly learn discrete and continuous interpretable representations in an unsupervised manner by applying two different data-free log-likelihood functions to the variables sampled from the generator's distribution. We propose a two-stage algorithm that optimizes the inference network separately from the generator training. Moreover, we enforce the learning of interpretable representations by maximizing the mutual information between the existing latent variables and those created through the generative and inference processes.
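To make the three-network layout and the encoder-only training stage concrete, the following is a minimal PyTorch sketch. The layer sizes, latent dimensions, and the particular likelihood terms used as a proxy for the mutual-information objective are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch (assumed configuration) of the Encoder/Generator/Discriminator
# layout and the second-stage, data-free encoder update described in the abstract.
import torch
import torch.nn as nn
import torch.nn.functional as F

CONT_DIM, DISC_DIM, DATA_DIM = 8, 10, 784  # assumed code/data sizes


class Encoder(nn.Module):
    """Inference network: infers continuous (Gaussian) and discrete
    (categorical) latent codes from data."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(DATA_DIM, 256), nn.ReLU())
        self.mu = nn.Linear(256, CONT_DIM)
        self.logits = nn.Linear(256, DISC_DIM)

    def forward(self, x):
        h = self.body(x)
        return self.mu(h), self.logits(h)


class Generator(nn.Module):
    """Maps concatenated continuous and discrete codes to data space."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(CONT_DIM + DISC_DIM, 256), nn.ReLU(),
            nn.Linear(256, DATA_DIM), nn.Sigmoid())

    def forward(self, z_cont, z_disc):
        return self.net(torch.cat([z_cont, z_disc], dim=1))


class Discriminator(nn.Module):
    """Scores samples as real or generated (standard GAN critic)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(DATA_DIM, 256), nn.ReLU(), nn.Linear(256, 1))

    def forward(self, x):
        return self.net(x)


def sample_priors(batch):
    """Draw continuous and one-hot discrete codes from fixed priors."""
    z_cont = torch.randn(batch, CONT_DIM)
    z_disc = F.one_hot(torch.randint(0, DISC_DIM, (batch,)), DISC_DIM).float()
    return z_cont, z_disc


def encoder_stage_loss(enc, gen, batch=64):
    """Second-stage objective sketch: fit the encoder on generator samples only
    ("data-free"), maximizing the log-likelihood of the codes that produced
    them -- a simple proxy for the mutual-information term in the abstract."""
    z_cont, z_disc = sample_priors(batch)
    with torch.no_grad():                  # generator is frozen in this stage
        x_fake = gen(z_cont, z_disc)
    mu, logits = enc(x_fake)
    cont_ll = -F.mse_loss(mu, z_cont)      # Gaussian log-likelihood up to a constant
    disc_ll = -F.cross_entropy(logits, z_disc.argmax(dim=1))
    return -(cont_ll + disc_ll)            # minimize the negative log-likelihood


if __name__ == "__main__":
    enc, gen, disc = Encoder(), Generator(), Discriminator()
    opt_e = torch.optim.Adam(enc.parameters(), lr=1e-4)
    loss = encoder_stage_loss(enc, gen)
    loss.backward()
    opt_e.step()
    print(f"encoder-stage loss: {loss.item():.3f}")
```

In this reading of the two-stage scheme, the adversarial Generator/Discriminator update would run as a separate stage, while the step above touches only the encoder's parameters and never uses real data.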
Original language: English
Title of host publication: Proc. IEEE International Conference on Image Processing
Publisher: IEEE
Number of pages: 5
Publication status: Published - 20 Sept 2021
