Temporal Transformer Encoder for Video Class Incremental Learning

Nattapong Kurpukdee, Adrian Gheorghe Bors

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Current video classification approaches suffer from catastrophic forgetting when they are retrained on new databases. Continual learning aims to enable a classification system to learn from a succession of tasks without forgetting. In this paper we propose a transformer-based video class incremental learning model. Over a succession of learning steps, at each training stage the transformer extracts characteristic spatio-temporal features from the videos of a given set of classes. When new video classification tasks become available, we train new classifier modules on the transformer-extracted features, gradually building a mixture model. The proposed methodology enables continual class learning in videos without requiring the learning of an initial set of classes, leading to low computation and memory requirements. The proposed model is evaluated on standard action recognition datasets, including UCF101 and HMDB51, which are split into sets of classes to be learnt sequentially. Our proposed method significantly outperforms the baselines on all datasets.

Index Terms: Continual video classification, video transformer, video class incremental learning.
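The abstract describes a shared transformer feature extractor with a new classifier module attached for each incoming set of classes, the modules together forming a mixture over all classes seen so far. The following is a minimal PyTorch sketch of that idea, not the paper's implementation: the class name IncrementalVideoClassifier, the add_task helper, the choice to freeze the encoder, and the assumption that the encoder maps a clip to a single feature vector are all illustrative.

```python
import torch
import torch.nn as nn

class IncrementalVideoClassifier(nn.Module):
    """Hypothetical sketch: a shared video transformer encoder plus one
    classifier head per incremental task, forming a mixture model."""

    def __init__(self, encoder: nn.Module, feat_dim: int):
        super().__init__()
        self.encoder = encoder        # pretrained spatio-temporal transformer
        self.feat_dim = feat_dim
        self.heads = nn.ModuleList()  # one linear head per learned task
        # Freeze the encoder so extracted features stay stable across tasks
        # (an assumption made for this sketch).
        for p in self.encoder.parameters():
            p.requires_grad = False

    def add_task(self, num_new_classes: int) -> nn.Module:
        """Attach a fresh classifier module when a new set of classes arrives."""
        head = nn.Linear(self.feat_dim, num_new_classes)
        self.heads.append(head)
        return head

    def forward(self, video: torch.Tensor) -> torch.Tensor:
        # Assumes encoder maps a clip tensor to features of shape
        # (batch, feat_dim), e.g. a pooled class token.
        feats = self.encoder(video)
        # Concatenate per-task logits: a prediction over all classes seen so far.
        return torch.cat([head(feats) for head in self.heads], dim=1)
```

In this sketch only the newly added head would be optimized at each step, e.g. torch.optim.Adam(model.heads[-1].parameters()), while the encoder and earlier heads stay fixed; this matches the stated low computation and memory motivation, though the paper's exact training procedure may differ.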
Original language: English
Title of host publication: IEEE International Conference on Image Processing (ICIP)
Place of publication: Abu Dhabi, UAE
Publisher: IEEE
Pages: 1295-1301
Number of pages: 7
ISBN (Electronic): 979-8-3503-4939-9
ISBN (Print): 979-8-3503-4940-5
DOIs
Publication status: Published - 30 Oct 2024

Bibliographical note

©2024 IEEE. This is an author-produced version of the published paper. Uploaded in accordance with the University’s Research Publications and Open Access policy.
