Lifelong Dual Generative Adversarial Nets Learning in Tandem

Research output: Contribution to journal › Article › peer-review

Abstract

Continually acquiring novel concepts without forgetting is one of the most critical capabilities sought in artificial intelligence systems. However, even the most advanced deep learning networks are prone to quickly forgetting previously learned knowledge after training on new data. The proposed lifelong dual generative adversarial networks (LD-GANs) consist of two generative adversarial networks (GANs), a Teacher and an Assistant, teaching each other in tandem while successively learning a series of tasks. A single discriminator decides the realism of the images generated by the dual GANs. A new training algorithm, called lifelong self-knowledge distillation (LSKD), is proposed for training the LD-GANs on each new task during lifelong learning (LLL). LSKD transfers knowledge from the more knowledgeable player to the other while jointly learning the information from a newly given dataset, within an adversarial game setting. In contrast to other LLL models, LD-GANs are memory efficient and do not require freezing any parameters after learning each task. Furthermore, we extend the LD-GANs to serve as the Teacher module in a Teacher-Student network for assimilating data representations across several domains during LLL. Experimental results indicate better performance for the proposed framework in unsupervised lifelong representation learning when compared with other methods.
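The abstract describes combining an adversarial objective with a self-distillation term that pulls the less knowledgeable generator toward the more knowledgeable one. A minimal sketch of that idea is shown below; the function name `lskd_loss`, the mean-squared-error distillation penalty, and the weighting factor `lam` are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def lskd_loss(adv_loss, teacher_out, assistant_out, lam=0.5):
    """Hypothetical sketch: adversarial loss plus a self-distillation
    penalty that matches the assistant's outputs to the teacher's.
    `lam` (assumed) trades off new-task learning against distillation."""
    distill = np.mean((teacher_out - assistant_out) ** 2)
    return adv_loss + lam * distill

# Toy usage: when both generators agree, only the adversarial term remains.
x = np.ones((4, 8))
print(lskd_loss(0.3, x, x))  # -> 0.3
```

In a full GAN training loop each player would minimize such a combined loss on the new task's data while the shared discriminator scores both generators' samples.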
Original language: English
Pages (from-to): 1353-1365
Number of pages: 13
Journal: IEEE Transactions on Cybernetics
Volume: 54
Issue number: 3
Early online date: 1 Jun 2023
DOIs
Publication status: Published - Mar 2024

Bibliographical note

© IEEE, 2023. This is an author-produced version of the published paper, uploaded in accordance with the publisher's self-archiving policy. Further copying may not be permitted; contact the publisher for details.
