Binaural summation of amplitude modulation involves weak interaural suppression

Research output: Contribution to journal › Article › peer-review


Publication details

Journal: Scientific Reports
Accepted/In press: 10 Feb 2020
Published: 26 Feb 2020
Volume: 10
Number of pages: 14
Original language: English

Abstract

The brain combines sounds from the two ears, but what is the algorithm used to achieve this summation of signals? Here we combine psychophysical amplitude modulation discrimination and steady-state electroencephalography (EEG) data to investigate the architecture of binaural combination for amplitude-modulated tones. Discrimination thresholds followed a ‘dipper’-shaped function of pedestal modulation depth, and were consistently lower for binaural than monaural presentation of modulated tones. The EEG responses were greater for binaural than monaural presentation of modulated tones, and when a masker was presented to one ear, it produced only weak suppression of the response to a signal presented to the other ear. Both data sets were well fit by a computational model originally derived for visual signal combination, but with suppression between the two channels (ears) being much weaker than in binocular vision. We suggest that the distinct ecological constraints on vision and hearing can explain this difference, if it is assumed that the brain avoids over-representing sensory signals originating from a single object. These findings position our understanding of binaural summation in a broader context of work on sensory signal combination in the brain, and delineate the similarities and differences between vision and hearing.
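
For readers who want a concrete picture of the architecture the abstract describes, the sketch below implements a generic two-stage gain-control model of signal combination of the kind originally developed for binocular contrast vision: each channel's input is divisively suppressed by the other channel, the two channels are then summed, and an output nonlinearity is applied. This is a minimal illustration under assumed settings, not the authors' fitted model; the weight w, exponents gamma, p and q, constants s and z, and the criterion k are all illustrative values, not those reported in the paper.

```python
# Illustrative sketch (not the paper's code): a two-stage gain-control
# model of binaural combination. All parameter values are assumptions
# chosen for demonstration.

def stage1(m_own, m_other, w=0.05, s=1.0, gamma=1.5):
    """First stage: one channel's modulation response, divisively
    suppressed by the other channel. A small weight w stands in for
    the weak interaural suppression described in the abstract."""
    return m_own ** gamma / (s + m_own + w * m_other)

def combined_response(m_left, m_right, p=2.0, q=1.5, z=0.1):
    """Second stage: sum the two channels, then apply an output
    gain-control nonlinearity."""
    b = stage1(m_left, m_right) + stage1(m_right, m_left)
    return b ** p / (z + b ** q)

def increment_threshold(pedestal, k=0.02, binaural=True):
    """Smallest modulation-depth increment that raises the model
    response by criterion k, found by bisection (the search is
    bounded to increments in [0, 1]; illustration only)."""
    def resp(m):
        return combined_response(m, m if binaural else 0.0)
    base = resp(pedestal)
    lo, hi = 0.0, 1.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if resp(pedestal + mid) - base < k:
            lo = mid
        else:
            hi = mid
    return hi

if __name__ == "__main__":
    # Tracing thresholds over pedestal depth shows the dipper shape:
    # facilitation at low pedestals, masking at high pedestals.
    for ped in (0.0, 0.01, 0.05, 0.1, 0.2, 0.4):
        print(ped,
              increment_threshold(ped, binaural=True),
              increment_threshold(ped, binaural=False))
```

With these illustrative settings, the summed binaural response exceeds the monaural one, so the model yields lower binaural than monaural thresholds across the dipper, mirroring the qualitative pattern reported above.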

Bibliographical note

© The Author(s) 2020
