Lifelong Mixture of Variational Autoencoders

Research output: Contribution to journal › Article › peer-review

Abstract

In this paper, we propose an end-to-end lifelong learning mixture of experts, where each expert is implemented by a Variational Autoencoder (VAE). The experts in the mixture system are jointly trained by maximizing a mixture of individual component evidence lower bounds (MELBO) on the log-likelihood of the given training samples. The mixing coefficients of the mixture model control the contribution of each expert to the global representation; they are sampled from a Dirichlet distribution whose parameters are determined through non-parametric estimation during lifelong learning. The model can learn new tasks quickly when they are similar to those previously learnt. The proposed Lifelong Mixture of VAEs (L-MVAE) expands its architecture with new components when learning a completely new task. After training, our model can automatically determine the relevant expert to be used when fed with new data samples. This mechanism benefits both memory efficiency and computational cost, as only one expert model is used during inference. The L-MVAE inference model can perform interpolations in the joint latent space across the data domains associated with different tasks and is shown to be effective for disentangled representation learning.
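The abstract describes the training objective as a weighted mixture of per-expert ELBOs, with mixing coefficients drawn from a Dirichlet distribution. The following is a minimal, hypothetical sketch of that idea (not the authors' code): it assumes simple Gaussian VAE experts over flattened inputs and fixed coefficients sampled once from a Dirichlet prior, and the names `VAEExpert` and `mixture_elbo` are illustrative only. The architecture expansion and expert-selection mechanisms of L-MVAE are not shown.

```python
# Sketch of a mixture-of-ELBOs objective for K VAE experts (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAEExpert(nn.Module):
    """One expert: a small Gaussian VAE over flattened inputs in [0, 1]."""
    def __init__(self, x_dim, z_dim=16, h_dim=128):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.mu = nn.Linear(h_dim, z_dim)
        self.logvar = nn.Linear(h_dim, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim))

    def elbo(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterization trick
        recon = -F.binary_cross_entropy_with_logits(self.dec(z), x,
                                                    reduction='none').sum(-1)
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1)
        return recon - kl                                       # per-sample ELBO

def mixture_elbo(experts, x, pi):
    """Weighted sum of per-expert ELBOs; pi are the mixing coefficients,
    here assumed to come from a Dirichlet prior."""
    elbos = torch.stack([e.elbo(x) for e in experts], dim=-1)   # [batch, K]
    return (elbos * pi).sum(-1).mean()

# Usage: K experts, coefficients drawn from a symmetric Dirichlet prior.
K, x_dim = 3, 784
experts = nn.ModuleList([VAEExpert(x_dim) for _ in range(K)])
pi = torch.distributions.Dirichlet(torch.ones(K)).sample()
x = torch.rand(32, x_dim)                                       # dummy batch
loss = -mixture_elbo(experts, x, pi)                            # maximize the mixture ELBO
loss.backward()
```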
Original language: English
Pages (from-to): 461-474
Number of pages: 14
Journal: IEEE Transactions on Neural Networks and Learning Systems
Volume: 34
Issue number: 1
Early online date: 9 Aug 2021
DOIs
Publication status: Published - 1 Jan 2023

Bibliographical note

© IEEE 2021. This is an author-produced version of the published paper, uploaded in accordance with the publisher’s self-archiving policy. Further copying may not be permitted; contact the publisher for details.
