Anthony I Tew


Personal profile

Research interests

My research focuses on the acoustics and psychoacoustics of human hearing, chiefly to deliver advances in audio technology.  In particular, we are working on:

  • ways to improve the quality and availability of binaural audio
  • binaural hearing aid algorithm development
  • improving accessibility to television content through auditory displays

These areas are summarised below.

1              The morphoacoustics of human hearing

Practically all external sounds reach the human auditory system via the left and right ear canals.  Despite being limited to only two channels of auditory information, we are capable of determining the direction of a sound source typically to within a few degrees.  Acoustically, this impressive performance can largely be accounted for by the auditory spatial cues of inter-aural time difference, inter-aural level difference and the pinna (outer ear) cues.  The hearing system combines these cues to create in us a sense of the sound’s direction.  The cues are embedded in a family of acoustic filters known as head-related transfer functions (HRTFs).  When the pressure variations from a sound source are input to a left HRTF, the output approximates the pressure variations at the left eardrum; a right HRTF produces the corresponding pressure variations at the right eardrum.
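
As an illustrative sketch of the HRTF filtering described above, a mono signal can be convolved with a left and a right head-related impulse response (the time-domain form of an HRTF) to approximate the two eardrum signals.  The impulse responses below are toy values invented for the example, not measured HRTFs; real ones would come from a measured or simulated set.

```python
# Minimal sketch of binaural rendering by HRTF filtering.
# The HRIRs here are hypothetical placeholders: the right-ear response is
# delayed and attenuated relative to the left, mimicking the inter-aural
# time and level differences for a source on the listener's left.
import numpy as np

def render_binaural(mono, hrir_left, hrir_right):
    """Convolve a mono signal with left/right head-related impulse
    responses to approximate the pressure signals at the two eardrums."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return left, right

fs = 48000                                       # sample rate, Hz
t = np.arange(fs // 10) / fs                     # 100 ms of signal
mono = np.sin(2 * np.pi * 500 * t)               # 500 Hz test tone
hrir_l = np.array([1.0])                         # direct path, full level
hrir_r = np.concatenate([np.zeros(30), [0.5]])   # ~0.6 ms delay, -6 dB
left, right = render_binaural(mono, hrir_l, hrir_r)
```

Played over headphones, the delay and level difference between the two channels are interpreted by the hearing system as a lateral source direction; measured HRIRs add the pinna cues as well.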

HRTFs are created as a result of the complex shape (or morphology) of the ear flaps, the head and upper torso.  With sufficient knowledge of an individual’s morphology it is possible to calculate their associated unique set of HRTFs.  This is currently a difficult and computationally intensive process, but nevertheless may ultimately be easier than measuring them acoustically.  Finding an efficient way to estimate individualised HRTFs is viewed by many as the key to achieving widespread exploitation of 3D personal audio.  Our research is contributing to this goal in several ways:

Morphoacoustic perturbation analysis

Pioneered at York and still under development, morphoacoustic perturbation analysis (MPA) is a powerful technique for probing the relationship between human morphology and the acoustic auditory cues present in HRTFs [1].  It has been used, for example, to reveal the regions of the outer ear responsible for acoustic features such as peaks and notches.  MPA has the potential to answer exciting research questions relating to human spatial hearing and to the introduction of high-quality binaural audio into the consumer market.

The Sydney-York Morphological and Recording-of-Ears database

The Sydney-York Morphological and Recording-of-Ears (SYMARE) database [2, 3] is the result of more than ten years’ collaboration between the Universities of York and Sydney.  The database consists of high resolution meshes describing the morphology of over 60 subjects together with measurements of their HRTFs.  The SYMARE database is an unrivalled source of physiological and acoustical data for informing and validating research in spatial audio.

2              The synthesis of binaural audio

The equivalence of live and virtual spatial audio

Methods for improving the performance of consumer-oriented sound reproduction are being researched and developed internationally.  In a laboratory environment it is possible to create sound reproduction systems which make it difficult to distinguish between a live listening experience and a virtual one.  Through headphones, spatial sound may be created using binaural audio methods based on HRTFs.  Rigorously demonstrating the equivalence of a live auditory experience and a virtual one through headphones is not trivial, because blind tests, in which the listener does not know whether they are listening to the virtual or to the real experience, cannot be achieved directly.  We have developed several methods for tackling this problem [4, 5, 6], and these provide us with ways of performing comparisons in different situations, including those described below.

Perceptually robust simplifications

Extending high performance binaural audio from the laboratory into consumer technology has proved to be very challenging and many factors are impeding progress.  A fertile area of research is identifying simplifications which can be applied to the problem space without affecting the perceptual integrity of the resulting audio.  We are investigating simplifications in the measurement of morphology, the computation of HRTFs and in the HRTFs themselves.  Finding suitable simplifications could finally lead to 3D audio which is effective, practical and viable outside the laboratory.

Assessing quality of experience and plausibility

An exact equivalence between real and virtual auditory experiences is hard to achieve, but often it is not required and may even be undesirable.  It may be sufficient to communicate a plausible sound scene that portrays the spatial impression intended by its creator without it necessarily matching reality precisely.  Indeed, virtual auditory scenes may deliberately set out to violate physical reality for artistic reasons and in such situations it makes no sense to aim for complete realism.  Creating a plausible sound scene rather than an exact one relaxes the technical constraints which need to be met.  This greatly aids reproducing binaural audio in an uncontrolled environment where, for example, relatively little is known about the listener (e.g. their HRTFs) and their situation (e.g. the acoustic properties of their listening space).  Through our research partnership with BBC R&D [7] we are involved in identifying the key processes necessary for achieving plausible binaural audio in such circumstances.

3              Spatially informed hearing aid algorithms

The healthy human hearing system is capable of performing well in a variety of adverse acoustic conditions.  A listener who has a hearing deficit, however, even if it affects only one ear, typically finds it much more difficult to understand a conversation in the presence of competing sounds.  Binaural hearing provides the auditory system with a means of distinguishing one sound from another based on their different locations.  It also plays an important role in increasing intelligibility in the presence of room reverberation.  We are investigating a wide variety of spatial cues and evaluating their potential for improving the intelligibility of speech in challenging acoustic environments.  Our goal is to develop a binaural audio algorithm suitable for implementing in a binaural hearing aid.
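
One of the spatial cues mentioned above, the inter-aural time difference (ITD), can be estimated from a binaural signal pair by cross-correlation.  The sketch below is a generic textbook illustration, not the group's hearing-aid algorithm, and the signals and delay are invented for the example.

```python
# Illustrative sketch: estimating the inter-aural time difference of a
# binaural signal pair by searching for the cross-correlation peak
# within the physically plausible range of lags (about +/- 1 ms).
import numpy as np

def estimate_itd(left, right, fs, max_itd_s=1e-3):
    """Return the lag (seconds) at which the right channel best aligns
    with the left; positive means the right ear receives the sound later."""
    max_lag = int(max_itd_s * fs)

    def corr(lag):
        if lag >= 0:
            a, b = left[:len(left) - lag], right[lag:]
        else:
            a, b = left[-lag:], right[:len(right) + lag]
        n = min(len(a), len(b))
        return float(np.dot(a[:n], b[:n]))

    best = max(range(-max_lag, max_lag + 1), key=corr)
    return best / fs

# Invented test case: a source on the left arrives at the right ear
# 19 samples (~0.4 ms) later than at the left ear.
fs = 48000
rng = np.random.default_rng(0)
sig = rng.standard_normal(fs // 2)
delay = 19
left = sig
right = np.concatenate([np.zeros(delay), sig[:-delay]])
itd = estimate_itd(left, right, fs)
```

A spatially informed hearing-aid algorithm could use an estimate like this, together with level-difference cues, to weight sounds by their apparent direction and suppress spatially separated competing sources.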

4              Non-visual displays for connected television

Over recent years the face of television has altered dramatically, from the provision of a small number of broadcast streams to the availability of hundreds of channels with interactive content and internet accessibility.  This explosion of content has required the development of increasingly complex user interfaces.  Particularly for people with visual impairments, navigating and using this greatly increased functionality is challenging.  In this research, based in BBC R&D, auditory methods are being explored for presenting companion content in ways which minimise the disruption to additive content, such as audio description.

5              Research projects

Postgraduate research projects are available, subject to funding, in the areas outlined above.  If you have a particular research topic in mind which lies somewhat outside these areas, please feel free to contact me to discuss it.

6              References

1. Tew, A. I., Hetherington, C. T., & Thorpe, J. B. (2012). Morphoacoustic perturbation analysis: principles and validation. Paper presented at Acoustics 2012, 23-27 April 2012, Nantes, France.
2. The Sydney-York Morphological and Recording-of-Ears (SYMARE) database.
3. Jin, C., Guillon, P., Epain, N., Zolfaghari, R., van Schaik, A., Tew, A. I., & Thorpe, J. (2014). Creating the Sydney York morphological and acoustic recordings of ears database. IEEE Transactions on Multimedia, 16(1), 37-46.
4. Moore, A. H., Tew, A. I., & Nicol, R. (2007). Headphone transparification: a novel method for investigating the externalisation of binaural sounds. Poster session presented at 123rd AES Convention, New York, United States.
5. Moore, A. H., Tew, A. I., & Nicol, R. (2010). An initial validation of individualized crosstalk cancellation filters for binaural perceptual experiments. Journal of the Audio Engineering Society, 58(1/2), 36-45.
6. Satongar, D., Pike, C., Lam, Y., & Tew, A. I. (2013). On the influence of headphones on localisation of loudspeaker sources. Paper presented at 135th AES Convention, New York, United States.
7. The BBC R&D Academic Research Partnership.


Tony Tew research summary/2015-02-15