Polar Transformation on Image Features for Orientation-Invariant Representations

Research output: Contribution to journal › Article


Author(s)

  • Jinhui Chen
  • Zhaojie Luo
  • Zhihong Zhang
  • Faliang Huang
  • Zhiling Ye
  • Tetsuya Takiguchi
  • Edwin R Hancock

Publication details

Journal: IEEE Transactions on Multimedia
Date accepted/in press: 13 Jul 2018
Date e-pub ahead of print: 16 Jul 2018
Date published (current): Feb 2019
Issue number: 2
Volume: 21
Number of pages: 12
Pages (from-to): 300-313
Early online date: 16/07/18
Original language: English

Abstract

The choice of image feature representation plays a crucial role in the analysis of visual information. Although a vast number of robust feature representation models have been proposed to improve the performance of different visual tasks, most existing feature representations (e.g. handcrafted features or Convolutional Neural Networks (CNN)) have a relatively limited capacity to capture highly orientation-invariant (rotation/reversal) features. The net consequence is suboptimal visual performance. To address these problems, this study adopts a novel transformational approach, which investigates the potential of using polar feature representations. Our low level consists of a histogram of oriented gradients, which is then binned using annular spatial bin-type cells applied to the polar gradient. This gives gradient binning invariance for feature extraction. In this way, the descriptors have significantly enhanced orientation-invariant capabilities. The proposed feature representation, termed orientation-invariant histograms of oriented gradients (Oi-HOG), is capable of accurate facial expression recognition (FER). In the context of the CNN architecture, we propose two polar convolution operations, referred to as Full Polar Convolution (FPolarConv) and Local Polar Convolution (LPolarConv), and use these to develop polar architectures for orientation-invariant CNN representations. Experimental results show that the proposed orientation-invariant image representation, based on polar models for both handcrafted and deep learning features, is competitive with state-of-the-art methods while maintaining a compact representation on a set of challenging benchmark image datasets.
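The central idea behind the annular polar binning described above — measuring each gradient orientation relative to the pixel's polar angle and accumulating histograms within ring-shaped cells, so the descriptor is unchanged when the image rotates — can be illustrated with a minimal sketch. This is not the authors' Oi-HOG pipeline; the function name, bin counts, and the specific relative-orientation scheme are illustrative assumptions:

```python
import numpy as np

def polar_binned_hog(img, n_rings=3, n_orient_bins=8):
    """Sketch: magnitude-weighted histogram of gradient orientations,
    measured relative to each pixel's polar angle and accumulated in
    annular (ring-shaped) cells around the image centre."""
    gy, gx = np.gradient(img.astype(float))       # gradients along rows, cols
    mag = np.hypot(gx, gy)                        # gradient magnitude
    ori = np.arctan2(gy, gx)                      # absolute gradient orientation

    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r = np.hypot(xx - cx, yy - cy)                # radius of each pixel
    theta = np.arctan2(yy - cy, xx - cx)          # polar angle of each pixel

    # Orientation relative to the radial direction: rotating the image
    # rotates ori and theta by the same angle, so this stays fixed.
    rel_ori = np.mod(ori - theta, 2 * np.pi)

    ring_idx = np.minimum((r / (r.max() + 1e-9) * n_rings).astype(int),
                          n_rings - 1)
    ori_idx = np.minimum((rel_ori / (2 * np.pi) * n_orient_bins).astype(int),
                         n_orient_bins - 1)

    desc = np.zeros((n_rings, n_orient_bins))
    np.add.at(desc, (ring_idx, ori_idx), mag)     # magnitude-weighted voting
    desc /= np.linalg.norm(desc) + 1e-9           # L2-normalise
    return desc.ravel()
```

Because a rotation moves pixel positions and gradient directions by the same angle, each annulus sees the same relative-orientation histogram; for rotations that land exactly on the pixel grid (e.g. `np.rot90`) the descriptor is essentially unchanged.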

Bibliographical note

This is an author-produced version of the published paper, uploaded in accordance with the publisher's self-archiving policy. Further copying may not be permitted; contact the publisher for details.

Research areas

  • CNN, Convolution, Feature extraction, HOG, Histograms, Image representation, Robustness, Rotation-invariant and reversal-invariant representation, Task analysis, Visualization
