Binaural summation of amplitude modulation involves weak interaural suppression

Daniel Hart Baker, Greta Vilidaite, Elizabeth McClarnon, Elena Valkova, Aurelio Massimo Bruno, Rebecca E. Millman

Research output: Contribution to journal › Article › peer-review

Abstract

The brain combines sounds from the two ears, but what is the algorithm used to achieve this summation of signals? Here we combine psychophysical amplitude modulation discrimination and steady-state electroencephalography (EEG) data to investigate the architecture of binaural combination for amplitude-modulated tones. Discrimination thresholds followed a ‘dipper’-shaped function of pedestal modulation depth, and were consistently lower for binaural than monaural presentation of modulated tones. The EEG responses were greater for binaural than monaural presentation of modulated tones, and when a masker was presented to one ear, it produced only weak suppression of the response to a signal presented to the other ear. Both data sets were well fit by a computational model originally derived for visual signal combination, but with suppression between the two channels (ears) being much weaker than in binocular vision. We suggest that the distinct ecological constraints on vision and hearing can explain this difference, if it is assumed that the brain avoids over-representing sensory signals originating from a single object. These findings position our understanding of binaural summation in a broader context of work on sensory signal combination in the brain, and delineate the similarities and differences between vision and hearing.
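The abstract does not reproduce the model equations, but the "computational model originally derived for visual signal combination" plausibly refers to the two-stage gain-control architecture of Meese, Georgeson & Baker (2006), given the authors' prior work on binocular combination. The sketch below is a minimal illustration under that assumption: each channel's input is divisively suppressed by the other channel (weight omega), the two channel outputs are summed, and an output nonlinearity is applied. The parameter values, the fixed-criterion threshold rule, and the specific weight on interaural suppression are all illustrative assumptions, not the paper's fitted model.

```python
import numpy as np

def two_stage_response(mL, mR, m=1.3, S=1.0, omega=0.05, p=8.0, q=6.5, Z=0.08):
    """Two-stage gain-control model (after Meese, Georgeson & Baker, 2006),
    applied here to amplitude-modulation depths (in %) at the two ears.
    omega scales interaural suppression: omega near 1 mimics the strong
    interocular suppression of binocular vision, whereas omega << 1 gives
    the weak interaural suppression the abstract reports. All parameter
    values are illustrative, not the paper's fitted estimates."""
    stage1_L = mL**m / (S + mL + omega * mR)  # each ear divisively suppressed by the other
    stage1_R = mR**m / (S + mR + omega * mL)
    binsum = stage1_L + stage1_R              # binaural summation of the two channels
    return binsum**p / (Z + binsum**q)        # output (stage-two) nonlinearity

def threshold(ped, k=0.1, binaural=True, **params):
    """Smallest modulation increment that raises the model response by a
    fixed criterion k -- a simple stand-in for a discrimination threshold."""
    test = ped + np.logspace(-2, 2, 4000)     # candidate test depths (%)
    base = two_stage_response(ped, ped if binaural else 0.0, **params)
    resp = two_stage_response(test, test if binaural else np.zeros_like(test), **params)
    idx = np.searchsorted(resp, base + k)     # resp is monotonic in test depth
    return test[min(idx, len(test) - 1)] - ped

# Thresholds trace out a 'dipper' (falling, then rising, with pedestal depth),
# and binaural thresholds sit below monaural ones, as the abstract describes.
for ped in [0.0, 0.5, 2.0, 8.0, 32.0]:
    print(f"pedestal {ped:5.1f}%  binaural {threshold(ped):6.3f}%"
          f"  monaural {threshold(ped, binaural=False):6.3f}%")
```

In this sketch, raising omega toward 1 recovers the stronger cross-channel suppression of the binocular-vision version of the model; the paper's central claim is that fitting the auditory data requires this weight to be much smaller.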
Original language: English
Article number: 3560
Number of pages: 14
Journal: Scientific Reports
Volume: 10
DOIs
Publication status: Published - 26 Feb 2020

Bibliographical note

© The Author(s) 2020
