Abstract
Localizing an indoor sound source is a challenging machine learning problem, especially when only a single microphone is available. To address this, this paper presents a novel solution that fuses visual and acoustic models through two complementary approaches. First, to estimate the orientation of the vocal object in a stable manner, we employ a visual estimation model built on a robust image feature representation that uses Fourier analysis to efficiently extract polar descriptors. Second, distance is estimated from the signal difference between the transmitting and receiving ends: phoneme-level hidden Markov models (HMMs) trained on clean speech are used to estimate the acoustic transfer function (ATF), modelling the speech signal as a network of phoneme HMMs. From the separated frame sequences of the estimated ATF, the signal difference between two positions can be derived and used to estimate the distance of the sound source. Experimental results show that the proposed method simultaneously extracts the direction and distance parameters of the sound source, thereby improving sound source localization.
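To make the two stages more concrete, the sketches below give one plausible reading of the abstract; the formulation and code are illustrative assumptions, not the authors' implementation.

The distance cue rests on the acoustic transfer function: in the short-time spectral domain the observed speech can be modelled as the clean speech multiplied by the ATF, so in the cepstral domain the ATF becomes an additive term that can be estimated by maximising the likelihood of the observation under the clean-speech phoneme HMMs:

```latex
% A common formulation of cepstral-domain ATF estimation (assumed here,
% not quoted from the paper):
O(\omega; n) = S(\omega; n)\, H(\omega)
\;\Longrightarrow\;
O_{\mathrm{cep}}(d; n) = S_{\mathrm{cep}}(d; n) + H_{\mathrm{cep}}(d),
\qquad
\hat{H} = \operatorname*{arg\,max}_{H} \Pr\left(O \mid \lambda_{S}, H\right)
```

where $O$ is the observed cepstrum at frame $n$, $S$ the clean speech, $H$ the ATF, and $\lambda_{S}$ the set of phoneme HMMs trained on clean speech. Comparing the ATFs estimated at different positions then yields the signal difference used for distance estimation.

For the visual stage, the following minimal Python sketch shows how Fourier analysis can yield a rotation-robust polar descriptor, in the spirit of the "polar HOG" keyword; the function name, bin counts, and the choice of gradient magnitude are hypothetical.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def polar_fourier_descriptor(patch, n_radii=16, n_angles=64):
    """Rotation-robust polar descriptor via Fourier analysis (illustrative sketch).

    Resamples the gradient magnitude of `patch` onto a polar grid and
    takes FFT magnitudes along the angular axis; an in-plane rotation is
    a circular shift in angle, which the FFT magnitude is invariant to.
    """
    # Gradient magnitude as a simple edge-strength map.
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)

    # Polar sampling grid centred on the patch.
    cy, cx = (np.array(mag.shape) - 1) / 2.0
    radii = np.linspace(0.0, min(cy, cx), n_radii)
    angles = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    rr, aa = np.meshgrid(radii, angles, indexing="ij")
    coords = np.stack([cy + rr * np.sin(aa), cx + rr * np.cos(aa)])

    # Bilinear resampling onto the (radius, angle) grid.
    polar = map_coordinates(mag, coords, order=1)

    # FFT magnitude along the angular axis removes rotation dependence.
    spectrum = np.abs(np.fft.rfft(polar, axis=1))
    desc = spectrum.ravel()
    return desc / (np.linalg.norm(desc) + 1e-12)
```

A descriptor of this kind could then be fed to an SVM for orientation classification, consistent with the "SVM" keyword.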
Original language | English
---|---
Article number | 107906
Number of pages | 13
Journal | Pattern Recognition
Volume | 115
Early online date | 23 Feb 2021
DOIs |
Publication status | E-pub ahead of print - 23 Feb 2021
Bibliographical note
This is an author-produced version of the published paper. Uploaded in accordance with the publisher's self-archiving policy.

Keywords
- Sound source localization
- acoustic transfer function
- HMM
- polar HOG
- SVM