An algorithm has been developed which reliably distinguishes voiced speech from other sounds and vocalisations, and it is now being optimised for real-time analysis. As part of this project we will extend it to detect vocalisations which contain consonants (specifically, stops or nasals).
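The project's actual voicing-detection algorithm is not described here; as an illustration only, a classic baseline distinguishes voiced frames from silence or frication using short-time energy and zero-crossing rate (voiced speech has high energy and few zero crossings). The thresholds below are assumptions for the sketch, not values from the project.

```python
import numpy as np

def is_voiced(frame, energy_thresh=0.01, zcr_thresh=0.15):
    """Classify a mono audio frame (float samples in [-1, 1]) as voiced.

    Voiced speech tends to combine high short-time energy with a low
    zero-crossing rate; unvoiced fricatives and silence do not.
    Thresholds are illustrative and would need tuning on real data.
    """
    energy = np.mean(frame ** 2)
    # Fraction of adjacent sample pairs whose sign changes.
    zcr = np.mean(np.abs(np.diff(np.sign(frame)))) / 2
    return bool(energy > energy_thresh and zcr < zcr_thresh)

# A low-frequency sine behaves like a voiced sound...
t = np.linspace(0, 0.03, 480)            # 30 ms frame at 16 kHz
voiced_frame = 0.5 * np.sin(2 * np.pi * 150 * t)
# ...while low-amplitude noise behaves like silence or frication.
rng = np.random.default_rng(0)
noise_frame = 0.005 * rng.standard_normal(480)
```

A real-time version would run this test on successive short frames of the microphone stream, which is presumably what the optimisation work targets.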
An app has been developed which displays shapes on an iPad screen when an infant produces a voiced vocalisation. The colour and movement of the shapes are random, but the size of each shape varies with the loudness of the utterance. Shapes appear only during voicing and disappear as soon as the voicing ends. The interface is currently being improved, with the aim of releasing the app on the App Store.
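The app's actual loudness-to-size mapping is not specified; one plausible sketch converts a frame's RMS level to decibels and scales it linearly between a silence floor and full scale to obtain a shape radius. The parameter names and values here are assumptions for illustration.

```python
import math

def shape_radius(frame, min_r=20.0, max_r=200.0, floor_db=-50.0):
    """Map a frame's RMS loudness to a shape radius (in screen points).

    RMS is converted to dB relative to full scale, then scaled linearly
    so that floor_db (or quieter) gives min_r and 0 dB gives max_r.
    """
    rms = math.sqrt(sum(s * s for s in frame) / len(frame))
    db = 20 * math.log10(max(rms, 1e-6))      # guard against log(0)
    level = min(max((db - floor_db) / -floor_db, 0.0), 1.0)
    return min_r + level * (max_r - min_r)
```

Using a dB scale rather than raw amplitude roughly matches perceived loudness, so the shape grows in a way that feels proportional to the infant's effort.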
The aim of this project is to finish developing the consonant-detection algorithm and to combine it with the app, creating an app that responds only to vocalisations which contain consonants.
The final app is intended to have clinical applications as well as research uses for investigating language development in typical and atypical populations (e.g., deaf infants), whose babble we hope to encourage using the app, hopefully leading to better language outcomes for these populations.