Exploring Expressivity and Emotion with Artificial Voice and Speech Technologies

Sandra Pauletto, Bruce Balentine, Christopher Pidcock, Kevin Jones, Leonardo Bottaci, Maria Aretoulaki, Jez Wells, Darren Mundy, James Balentine

Research output: Contribution to journal › Article › peer-review

Abstract

Emotion in audio-voice signals, as synthesized by text-to-speech (TTS) technologies, was investigated to formulate a theory of expression for user interface design. Emotional parameters were specified with markup tags, and the resulting audio was further modulated with post-processing techniques. Software was then developed to link a selected TTS synthesizer with an automatic speech recognition (ASR) engine, producing a chatbot that could speak and listen. Using these two artificial voice subsystems, investigators explored both artistic and psychological implications of artificial speech emotion. Goals of the investigation were interdisciplinary, with interest in musical composition, augmentative and alternative communication (AAC), commercial voice announcement applications, human–computer interaction (HCI), and artificial intelligence (AI). The work-in-progress points towards an emerging interdisciplinary ontology for artificial voices. As one study output, HCI tools are proposed for future collaboration.
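
The pipeline the abstract describes (emotional parameters expressed as markup tags, a TTS synthesizer for output, and an ASR engine for input, joined into a chatbot loop) can be sketched with off-the-shelf components. The sketch below is illustrative only, not the authors' system: pyttsx3 stands in for the unnamed synthesizer, the speech_recognition package (with Google's free web recognizer) stands in for the unnamed ASR engine, W3C SSML prosody attributes stand in for the paper's unspecified emotion tags, and respond() is a hypothetical placeholder for the chatbot logic.

    # Minimal TTS + ASR chatbot loop (Python). All component choices
    # are assumptions; the paper does not name its subsystems.
    import pyttsx3                      # stand-in TTS engine
    import speech_recognition as sr     # stand-in ASR front end

    # One real example of emotion-as-markup: SSML prosody attributes.
    # pyttsx3 does not parse SSML, so the same intent is mapped onto
    # its rate/volume properties below.
    SAD_SSML = '<prosody rate="slow" pitch="-15%" volume="soft">{}</prosody>'

    engine = pyttsx3.init()
    engine.setProperty("rate", 120)     # slower speech to suggest sadness
    engine.setProperty("volume", 0.7)   # softer delivery

    recognizer = sr.Recognizer()

    def respond(utterance: str) -> str:
        # Hypothetical dialogue rule; the paper's chatbot logic is unspecified.
        return "You said: " + utterance

    def chat_once() -> None:
        # Listen once, recognize, and speak a reply with the modulated voice.
        with sr.Microphone() as source:
            recognizer.adjust_for_ambient_noise(source)
            audio = recognizer.listen(source)
        try:
            heard = recognizer.recognize_google(audio)
        except sr.UnknownValueError:
            heard = ""
        engine.say(respond(heard) if heard else "I did not catch that.")
        engine.runAndWait()

    if __name__ == "__main__":
        chat_once()

The abstract's second expressive layer, post-processing of the synthesized audio, would sit between synthesis and playback, for example by rendering speech to a file with engine.save_to_file() and filtering it with a signal-processing library.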
Original language: English
Pages (from-to): 115-125
Number of pages: 11
Journal: Logopedics Phoniatrics Vocology
Volume: 38
Issue number: 3
DOIs
Publication status: Published - Oct 2013
