TY - JOUR
T1 - Lifelong Generative Adversarial Autoencoder
AU - Ye, Fei
AU - Bors, Adrian Gheorghe
N1 - © IEEE, 2023. This is an author-produced version of the published paper. Uploaded in accordance with the University’s Research Publications and Open Access policy.
PY - 2024/10
Y1 - 2024/10
N2 - Lifelong learning describes the ability that enables humans to continually acquire and learn new information without forgetting. This capability, common to humans and animals, has lately been identified as an essential function for artificial intelligence systems aiming to learn continuously from a stream of data over a period of time. However, modern neural networks suffer from degraded performance when learning multiple domains sequentially and fail to recognize previously learnt tasks after retraining. This phenomenon, known as catastrophic forgetting, is ultimately caused by overwriting the parameters associated with previously learnt tasks with new values. One approach in lifelong learning is the Generative Replay Mechanism (GRM), which trains a powerful generator as the generative replay network, implemented by either a Variational Autoencoder (VAE) or a Generative Adversarial Network (GAN). In this paper, we study the forgetting behaviour of GRM-based learning systems by developing a new theoretical framework in which the forgetting process is expressed as an increase in the model's risk during training. Although many recent attempts have provided high-quality generative replay samples by using GANs, their applicability to downstream tasks is limited because they lack an inference mechanism. Inspired by this theoretical analysis, and aiming to address the drawbacks of existing approaches, we propose the Lifelong Generative Adversarial Autoencoder (LGAA). LGAA consists of a generative replay network and three inference models, each addressing the inference of a different type of latent variable. The experimental results show that LGAA learns novel visual concepts without forgetting and can be applied to a wide range of downstream tasks.
AB - Lifelong learning describes the ability that enables humans to continually acquire and learn new information without forgetting. This capability, common to humans and animals, has lately been identified as an essential function for artificial intelligence systems aiming to learn continuously from a stream of data over a period of time. However, modern neural networks suffer from degraded performance when learning multiple domains sequentially and fail to recognize previously learnt tasks after retraining. This phenomenon, known as catastrophic forgetting, is ultimately caused by overwriting the parameters associated with previously learnt tasks with new values. One approach in lifelong learning is the Generative Replay Mechanism (GRM), which trains a powerful generator as the generative replay network, implemented by either a Variational Autoencoder (VAE) or a Generative Adversarial Network (GAN). In this paper, we study the forgetting behaviour of GRM-based learning systems by developing a new theoretical framework in which the forgetting process is expressed as an increase in the model's risk during training. Although many recent attempts have provided high-quality generative replay samples by using GANs, their applicability to downstream tasks is limited because they lack an inference mechanism. Inspired by this theoretical analysis, and aiming to address the drawbacks of existing approaches, we propose the Lifelong Generative Adversarial Autoencoder (LGAA). LGAA consists of a generative replay network and three inference models, each addressing the inference of a different type of latent variable. The experimental results show that LGAA learns novel visual concepts without forgetting and can be applied to a wide range of downstream tasks.
U2 - 10.1109/TNNLS.2023.3281091
DO - 10.1109/TNNLS.2023.3281091
M3 - Article
SN - 2162-237X
VL - 35
SP - 14684
EP - 14698
JO - IEEE Transactions on Neural Networks and Learning Systems
JF - IEEE Transactions on Neural Networks and Learning Systems
IS - 10
ER -