Abstract
In this paper, we propose a new continually learning generative
model, called the Lifelong Twin Generative Adversarial
Networks (LT-GANs). LT-GANs learn a sequence of tasks
from several databases, and their architecture consists of three
components: two identical generators, namely the Teacher
and the Assistant, and one Discriminator. To allow
LT-GANs to learn new concepts without forgetting, we
introduce a new lifelong training approach, namely Lifelong
Adversarial Knowledge Distillation (LAKD), which encourages
the Teacher and the Assistant to alternately teach each other
while learning a new database. This training approach favours
transferring knowledge from the more knowledgeable player to
the player that knows less about a previously
given task.
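The alternating teach-each-other dynamic described above can be illustrated with a toy sketch. This is not the paper's method: the real LT-GANs use GAN generators and a Discriminator, whereas here the two "players" are plain linear maps, the distillation loss is a simple output-matching MSE on latent samples, and all names and hyperparameters are illustrative assumptions.

```python
import numpy as np

# Toy sketch of alternating knowledge distillation between two identical
# "generators" (here: linear maps). In LAKD terms, the more knowledgeable
# player acts as teacher for the other on each new task. Everything below
# (the MSE distillation loss, learning rate, step counts) is illustrative.

rng = np.random.default_rng(0)

def distill(teacher_w, student_w, z, lr=0.5, steps=200):
    """Move the student toward the teacher by matching outputs on latent z."""
    for _ in range(steps):
        # Gradient of mean-squared output difference w.r.t. the student's weights.
        grad = (student_w @ z - teacher_w @ z) @ z.T / z.shape[1]
        student_w = student_w - lr * grad
    return student_w

dim = 4
teacher = rng.normal(size=(dim, dim))
assistant = rng.normal(size=(dim, dim))

# Alternate roles over a sequence of "tasks" (databases): on each task,
# one player teaches and the other learns, so knowledge is shared.
for task in range(4):
    z = rng.normal(size=(dim, 64))  # latent samples drawn for this task
    if task % 2 == 0:
        assistant = distill(teacher, assistant, z)
    else:
        teacher = distill(assistant, teacher, z)

# After alternating distillation, the two players hold near-identical knowledge.
gap = float(np.abs(teacher - assistant).max())
print(gap)
```

In this simplified setting the two players converge to the same weights; in the actual LT-GANs, the adversarial losses against the Discriminator keep each player learning new data while distillation preserves previously acquired knowledge.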
| Original language | English |
| --- | --- |
| Title of host publication | Proc. of IEEE International Conference on Image Processing (ICIP) |
| Publisher | IEEE |
| Number of pages | 5 |
| Publication status | Published - 20 Sept 2021 |