Deep Residual Compensation Convolutional Network without Backpropagation

Mubarakah Alotaibi*, Richard Charles Wilson

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

PCANet and its variants have achieved good classification accuracy. However, despite the importance of network depth for classification performance, these networks have been trained with at most nine layers. In this paper, we introduce a residual compensation convolutional network, the first PCANet-like network trained with hundreds of layers while improving classification accuracy. The proposed network consists of several convolutional layers, each followed by post-processing steps and a classifier. To correct classification errors and substantially increase the network's depth, we train each layer with new labels derived from the residual information of all its preceding layers. This learning mechanism is carried out by traversing the network's layers in a single forward pass, without backpropagation or gradient computations. Our experiments on four classification benchmarks (MNIST, CIFAR-10, CIFAR-100, and TinyImageNet) show that our deep network outperforms all existing PCANet-like networks and is competitive with several traditional gradient-based models.
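
The layer-wise residual mechanism described above can be illustrated with a short sketch. This is a minimal illustration under stated assumptions, not the authors' implementation: extract_features is a hypothetical stand-in for a PCA-based convolutional layer with its post-processing steps, and ridge regression stands in for the per-layer classifier; the actual feature extractors and classifiers in the paper may differ.

import numpy as np
from sklearn.linear_model import Ridge
from sklearn.preprocessing import OneHotEncoder

def train_residual_network(X, y, num_layers, extract_features):
    """Train layers sequentially in one forward pass, no backpropagation.

    Each layer's classifier is fitted against the residual left by the
    accumulated predictions of all preceding layers, so later layers
    compensate for earlier layers' errors.
    """
    # One-hot targets (sparse_output requires scikit-learn >= 1.2).
    Y = OneHotEncoder(sparse_output=False).fit_transform(y.reshape(-1, 1))
    residual = Y.copy()              # layer 1 is trained on the true labels
    accumulated = np.zeros_like(Y)   # running sum of per-layer predictions
    classifiers, features = [], X
    for _ in range(num_layers):
        features = extract_features(features)  # forward pass only, no gradients
        clf = Ridge(alpha=1.0).fit(features, residual)
        accumulated += clf.predict(features)
        residual = Y - accumulated   # new labels for the next layer
        classifiers.append(clf)
    return classifiers

At test time, prediction would sum the per-layer outputs along the same single forward pass and take the argmax of the accumulated scores.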
Original language: English
Title of host publication: International Joint Conference on Neural Networks
Publisher: IEEE Computer Society
Publication status: Published - 18 Jun 2023

Bibliographical note

This is an author-produced version of the published paper, uploaded in accordance with the publisher's self-archiving policy. Further copying may not be permitted; contact the publisher for details.
