Abstract
Existing machine learning systems are typically trained on a single database, and their ability to acquire additional information is limited. Catastrophic forgetting occurs in deep learning systems when they are trained on additional databases: the information learnt previously is forgotten and no longer recognized once such a learning system is trained on a new database. In this paper, we develop a new image generation approach, defined under the lifelong learning framework, which prevents forgetting. We maximize the mutual information between the latent variable space and the outputs of the generator network in order to learn interpretable representations while learning from a series of databases sequentially. We also provide a theoretical framework for the generative replay mechanism under the lifelong learning setting. We perform a series of experiments
showing that the proposed approach is able to learn a set of disjoint data distributions in a sequential manner while also capturing meaningful data representations across domains.
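The mutual information term mentioned in the abstract is commonly estimated with an InfoGAN-style variational lower bound, E[log Q(c|x)] + H(c), where an auxiliary network Q recovers the latent code c from the generated sample x. The sketch below is only an illustration of that standard bound for a categorical code, not the paper's exact objective; the function name `mi_lower_bound` and the NumPy setup are assumptions for demonstration.

```python
import numpy as np

def mi_lower_bound(code_onehot, q_probs):
    """Variational lower bound on I(c; G(z, c)) for a categorical
    latent code c: E[log Q(c|x)] + H(c), estimated over a batch.

    code_onehot : (batch, K) one-hot codes fed to the generator
    q_probs     : (batch, K) Q-network's predicted distribution over c
    """
    eps = 1e-8
    # E[log Q(c|x)]: log-probability Q assigns to the true code
    log_q = np.log((q_probs * code_onehot).sum(axis=1) + eps).mean()
    # H(c): entropy of the empirical code prior over the batch
    prior = code_onehot.mean(axis=0)
    h_c = -(prior * np.log(prior + eps)).sum()
    return log_q + h_c
```

For a uniform code over K categories, a perfect Q (predicting the true code with probability 1) attains the maximum of the bound, log K, while a Q that ignores x and predicts uniformly yields a bound near 0; maximizing this quantity with respect to both the generator and Q encourages interpretable, code-dependent generations.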
Original language | English |
---|---|
Title of host publication | Proc. Int. Conf. on Image Processing, Theory, Tools and Applications (IPTA) |
Place of Publication | Paris, France |
Publisher | IEEE |
Number of pages | 6 |
DOIs | |
Publication status | Published - 10 Nov 2020 |