Masked Conditional Neural Networks for Environmental Sound Classification

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Publication details

Title of host publication: Artificial Intelligence XXXIV
Date: Published - Feb 2018
Pages: 21-33
Publisher: Springer
Original language: English
ISBN (Electronic): 9783319710785
ISBN (Print): 9783319710778

Publication series

Name: Lecture Notes in Computer Science
Publisher: Springer
Volume: 10630

Abstract

The ConditionaL Neural Network (CLNN) exploits the temporal sequencing of the sound signal represented in a spectrogram, and its variant, the Masked ConditionaL Neural Network (MCLNN), induces the network to learn in frequency bands by embedding filterbank-like sparseness over the network's links using a binary mask. Additionally, the masking automates the concurrent exploration of different feature combinations, analogous to handcrafting the optimum combination of features for a recognition task. We evaluated the MCLNN performance on the Urbansound8k dataset of environmental sounds. We also present YorNoise, a collection of manually recorded rail and road traffic sounds, to investigate the confusion rates among machine-generated sounds possessing low-frequency components. On Urbansound8k, the MCLNN achieved competitive results without augmentation, using 12% of the trainable parameters of an equivalent model based on state-of-the-art Convolutional Neural Networks. We extended the Urbansound8k dataset with YorNoise, where experiments showed that common tonal properties affect classification performance.
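The filterbank-like masking described above can be illustrated with a minimal sketch. The `band_mask` function, the bandwidth/stride values, and the layer sizes below are illustrative assumptions, not the paper's exact MCLNN mask design (which also conditions each frame on its temporal neighbours and controls the overlap between bands):

```python
import numpy as np

def band_mask(n_in, n_out, bandwidth, stride):
    """Binary mask enforcing filterbank-like sparseness:
    hidden unit j is linked only to a band of `bandwidth`
    consecutive input features starting at (j * stride) % n_in."""
    mask = np.zeros((n_in, n_out), dtype=np.float32)
    for j in range(n_out):
        start = (j * stride) % n_in
        for k in range(bandwidth):
            mask[(start + k) % n_in, j] = 1.0
    return mask

# Masked dense layer: the elementwise product W * M zeroes all
# links outside each unit's frequency band, so every hidden unit
# responds to one band of the 40 spectrogram bins (sizes assumed).
rng = np.random.default_rng(0)
W = rng.standard_normal((40, 10)).astype(np.float32)
M = band_mask(40, 10, bandwidth=8, stride=4)
x = rng.standard_normal(40).astype(np.float32)  # one spectrogram frame
h = np.tanh(x @ (W * M))
```

Because the mask is fixed before training, gradient updates only ever adjust the surviving banded links, which is what confines each hidden unit to its frequency band.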
