Multimodal Fusion for Indoor Sound Source Localization

Jinhui Chen, Ryoichi Takashima, Xingchen Guo, Zhihong Zhang, Xuexin Xu, Tetsuya Takiguchi, Edwin R. Hancock

Research output: Contribution to journal › Article › peer-review


Localizing an indoor sound source is a challenging problem for machine learning, especially when only a single microphone is available. To address this, this paper presents a novel solution based on fusing visual and acoustic models, for which we propose two approaches. First, to estimate the orientation of a vocal object in a stable manner, we employ a visual estimation model in which we develop a robust image feature representation that uses Fourier analysis to efficiently extract polar descriptors. Second, distance information is estimated by computing the signal difference between the transmitting and receiving ends. To implement this, we use phoneme-level hidden Markov models (HMMs) trained on clean speech to estimate the acoustic transfer function (ATF), representing the speech signal as a network of phoneme HMMs. From the separated frame sequences of the ATF we can measure the signal difference between two positions, which is used to estimate the distance to the sound source. Experimental results show that the proposed method simultaneously extracts the direction and distance of the sound source, and thereby improves the verification task of sound source localization.
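The abstract does not give the details of the Fourier-based polar descriptor (presumably the "polar HOG" listed among the keywords), so the following is only a minimal sketch of the underlying idea: resample an image patch onto a polar grid so that in-plane rotation becomes a circular shift along the angular axis, then take FFT magnitudes along that axis to obtain a rotation-invariant descriptor. The function name, the raw-intensity sampling (the actual descriptor builds histograms of oriented gradients), and the grid parameters are all illustrative assumptions.

```python
import numpy as np

def polar_descriptor(patch, n_rings=4, n_angles=16):
    """Sketch of a rotation-invariant polar descriptor via Fourier analysis.

    The patch is resampled on a (rings x angles) polar grid; rotating the
    patch circularly shifts the angular axis, so FFT magnitudes along that
    axis discard rotation (it only affects the phase).
    """
    h, w = patch.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    max_r = min(cy, cx)
    rings = np.linspace(max_r / n_rings, max_r, n_rings)
    angles = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    # Nearest-neighbour sampling of intensities on the polar grid.
    polar = np.empty((n_rings, n_angles))
    for i, r in enumerate(rings):
        ys = np.clip(np.round(cy + r * np.sin(angles)).astype(int), 0, h - 1)
        xs = np.clip(np.round(cx + r * np.cos(angles)).astype(int), 0, w - 1)
        polar[i] = patch[ys, xs]
    # FFT magnitude along the angular axis is invariant to circular shifts.
    desc = np.abs(np.fft.fft(polar, axis=1))
    desc /= np.linalg.norm(desc) + 1e-12
    return desc.ravel()
```

For an odd-sized patch and an angular resolution divisible by four, a 90-degree rotation shifts the angular samples by exactly a quarter turn, so the descriptor is unchanged; arbitrary rotations are handled only approximately by the discretized grid.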
Original language: English
Article number: 107906
Number of pages: 13
Journal: Pattern Recognition
Early online date: 23 Feb 2021
Publication status: E-pub ahead of print - 23 Feb 2021

Bibliographical note

This is an author-produced version of the published paper. Uploaded in accordance with the publisher’s self-archiving policy.


Keywords

  • Sound source localization
  • acoustic transfer function
  • HMM
  • polar HOG
  • SVM
