Abstract
Current video classification approaches suffer from catastrophic forgetting when they are retrained on new datasets. Continual learning aims to enable a classification system to learn from a succession of tasks without forgetting. In this paper we propose a transformer-based video class-incremental learning model. At each step in a succession of learning tasks, the transformer extracts characteristic spatio-temporal features from videos corresponding to a set of classes. When new video classification tasks become available, we train new classifier modules on the transformer-extracted features, gradually building a mixture model. The proposed methodology enables continual class learning in videos without requiring an initial set of classes to be learnt beforehand, leading to low computation and memory requirements. The proposed model is evaluated on standard action recognition datasets, including UCF101 and HMDB51, which are split into sets of classes to be learnt sequentially. Our proposed method significantly outperforms the baselines on all datasets.

Index Terms: Continual video classification, video transformer, video class incremental learning.
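The core mechanism the abstract describes — a frozen feature extractor with a new classifier module added per task, their outputs combined into a growing mixture — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `extractor` callable stands in for the pretrained video transformer, the class names and the least-squares head are hypothetical simplifications, and the real model's classifier modules and combination rule may differ.

```python
import numpy as np

class IncrementalVideoClassifier:
    """Toy sketch of class-incremental learning with a frozen extractor:
    one lightweight linear head is fitted per task and kept forever, so
    earlier heads are never retrained (no catastrophic forgetting)."""

    def __init__(self, extractor):
        self.extractor = extractor  # stand-in for the frozen video transformer
        self.heads = []             # one (weights, class_ids) pair per learnt task

    def learn_task(self, videos, local_labels, class_ids):
        """Fit a new head on this task's classes only.

        local_labels index into class_ids (0..k-1 within the task)."""
        X = self.extractor(videos)                 # features, shape (n, d)
        Y = np.eye(len(class_ids))[local_labels]   # one-hot targets, shape (n, k)
        W = np.linalg.lstsq(X, Y, rcond=None)[0]   # closed-form linear head, (d, k)
        self.heads.append((W, np.asarray(class_ids)))

    def predict(self, videos):
        X = self.extractor(videos)
        # Concatenate scores from every head: the growing mixture of classifiers.
        scores = np.hstack([X @ W for W, _ in self.heads])
        all_ids = np.concatenate([ids for _, ids in self.heads])
        return all_ids[np.argmax(scores, axis=1)]
```

A key design point reflected here is that adding a task touches only the new head: the extractor and all previous heads stay fixed, which is what keeps memory and compute low and avoids requiring a large initial class set.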
Original language | English |
---|---|
Title of host publication | IEEE International Conference on Image Processing (ICIP) |
Place of Publication | Abu Dhabi, UAE |
Publisher | IEEE |
Pages | 1295-1301 |
Number of pages | 7 |
ISBN (Electronic) | 979-8-3503-4939-9 |
ISBN (Print) | 979-8-3503-4940-5 |
DOIs | |
Publication status | Published - 30 Oct 2024 |