
Environmental Sound Recognition Using Masked Conditional Neural Networks

Research output: Chapter in Book/Report/Conference proceeding - Conference contribution


Publication details

Title of host publication: Advanced Data Mining and Applications - 13th International Conference, ADMA 2017, Proceedings
Date Accepted/In press: 19 Aug 2017
Date E-pub ahead of print: 5 Nov 2017
Date Published (current): 5 Nov 2017
Pages: 373-385
Number of pages: 13
Editors: Wen-Chih Peng, Wei Emma Zhang, Gao Cong, Aixin Sun, Chengliang Li
Original language: English
ISBN (Electronic): 978-3-319-69179-4

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 10604 LNAI
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Abstract

Neural network architectures used for sound recognition are usually adapted from other application domains and may not harness sound-related properties. The ConditionaL Neural Network (CLNN) is designed to consider the relational properties across frames in a temporal signal, and its extension, the Masked ConditionaL Neural Network (MCLNN), embeds a filterbank-like behaviour within the network, forcing it to learn in frequency bands rather than individual bins. Additionally, it automates the exploration of different feature combinations, analogous to hand-crafting the optimum combination of features for a recognition task. We applied the MCLNN to the environmental sounds of the ESC-10 dataset. The MCLNN achieved competitive accuracies compared to state-of-the-art convolutional neural networks and hand-crafted feature attempts.
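The masking idea described in the abstract can be illustrated with a short sketch: a binary mask restricts each hidden unit of a dense layer to a contiguous band of frequency bins, giving a filterbank-like connection pattern. This is a minimal illustration under stated assumptions, not the paper's implementation; the helper name `make_band_mask` and its bandwidth/overlap parameters are invented here for illustration.

```python
# Minimal sketch (illustrative, not the paper's code): a binary band mask
# applied element-wise to a dense layer's weights so that each hidden unit
# only connects to a contiguous band of frequency bins.
import numpy as np

def make_band_mask(n_bins, n_hidden, bandwidth, overlap):
    """Binary mask of shape (n_bins, n_hidden); column j is 1 only within a
    band of `bandwidth` bins whose start shifts by (bandwidth - overlap)."""
    mask = np.zeros((n_bins, n_hidden))
    step = max(bandwidth - overlap, 1)
    for j in range(n_hidden):
        start = (j * step) % n_bins
        mask[start:start + bandwidth, j] = 1.0
    return mask

# Example: 60 spectrogram bins, 100 hidden units (arbitrary illustrative sizes).
n_bins, n_hidden = 60, 100
W = np.random.randn(n_bins, n_hidden) * 0.01      # dense layer weights
mask = make_band_mask(n_bins, n_hidden, bandwidth=5, overlap=3)

x = np.random.randn(n_bins)                       # one spectrogram frame
hidden = np.tanh(x @ (W * mask))                  # only in-band connections contribute
```

Because different mask layouts expose the network to different subsets of the input features, varying the bandwidth and overlap plays a role comparable to exploring feature combinations by hand.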

Research areas

Boltzmann machine, CLNN, CRBM, Conditional RBM, Conditional neural network, DNN, Deep neural network, ESR, Environmental sound recognition, MCLNN, Masked Conditional neural network, RBM
