Abstract
PCANet and its variants have achieved good accuracy on classification tasks. However, despite the importance of network depth for classification accuracy, these networks have been trained with at most nine layers. In this paper, we introduce a residual compensation convolutional network, the first PCANet-like network trained with hundreds of layers while still improving classification accuracy. The proposed network consists of several convolutional layers, each followed by post-processing steps and a classifier. To correct classification errors and substantially increase the network's depth, we train each layer with new labels derived from the residual information of all its preceding layers. This learning mechanism is carried out by traversing the network's layers in a single forward pass, without backpropagation or gradient computations. Our experiments on four classification benchmarks (MNIST, CIFAR-10, CIFAR-100, and TinyImageNet) show that our deep network outperforms all existing PCANet-like networks and is competitive with several traditional gradient-based models.
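As a rough illustration of the forward-only, layer-wise residual training described in the abstract, the sketch below trains a sequence of stages where each stage fits a regressor to the residual between the one-hot labels and the accumulated outputs of all earlier stages. The use of PCA projections, a ridge regressor, and the parameter names (`n_stages`, `n_components`) are illustrative assumptions, not the paper's actual filter or classifier design.

```python
# Minimal conceptual sketch of forward-only, layer-wise residual training.
# Assumption: each stage is a PCA projection of the current features followed
# by a ridge regressor fit on the residual between the one-hot labels and the
# accumulated predictions of the preceding stages. This is NOT the paper's
# exact architecture; it only illustrates the residual-label idea.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge

def train_residual_stages(X, y, n_classes, n_stages=5, n_components=32):
    """Train stages sequentially in a single forward pass (no backprop)."""
    Y = np.eye(n_classes)[y]                     # one-hot targets
    accumulated = np.zeros_like(Y, dtype=float)  # running sum of stage outputs
    stages, feats = [], X
    for _ in range(n_stages):
        pca = PCA(n_components=min(n_components, feats.shape[1])).fit(feats)
        feats = pca.transform(feats)             # unsupervised feature stage
        residual = Y - accumulated               # new targets from prior stages
        reg = Ridge(alpha=1.0).fit(feats, residual)
        accumulated = accumulated + reg.predict(feats)
        stages.append((pca, reg))
    return stages

def predict(stages, X):
    """Sum the stage outputs and take the arg-max class."""
    feats, out = X, 0.0
    for pca, reg in stages:
        feats = pca.transform(feats)
        out = out + reg.predict(feats)
    return np.argmax(out, axis=1)
```

In this toy form, each added stage can only refine what earlier stages left unexplained, which is why the stack can be made deep without revisiting or re-training earlier layers.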
| Original language | English |
| --- | --- |
| Title of host publication | International Joint Conference on Neural Networks |
| Publisher | IEEE Computer Society |
| Publication status | Published - 18 Jun 2023 |