Articulatory Text-to-Speech Synthesis Using the Digital Waveguide Mesh Driven by a Deep Neural Network

Amelia Jane Gully, Takenori Yoshimura, Damian Thomas Murphy, Kei Hashimoto, Yoshihiko Nankaku, Keiichi Tokuda

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Following recent advances in direct modeling of the speech waveform using a deep neural network, we propose a novel method that directly estimates a physical model of the vocal tract from the speech waveform, rather than from magnetic resonance imaging data. This provides a clear relationship between the model and the size and shape of the vocal tract, offering considerable flexibility in terms of speech characteristics such as age and gender. Initial tests indicate that, despite a highly simplified physical model, intelligible synthesized speech is obtained. This illustrates the potential of the combined technique for the control of physical models in general, and hence for the generation of more natural-sounding synthetic speech.
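The abstract gives no implementation detail, but the waveguide idea it builds on can be illustrated with a minimal sketch. The code below is a 1-D Kelly-Lochbaum scattering chain, a classic one-dimensional special case of the digital waveguide family, not the authors' 2-D/3-D digital waveguide mesh; the area function, sample rate, pitch, and end reflection coefficients are illustrative assumptions only. In the proposed method, tract-shape parameters like these would be estimated by a deep neural network from the speech waveform rather than fixed by hand.

```python
import numpy as np

def kelly_lochbaum(areas, excitation, glottal_r=0.9, lip_r=-0.9):
    """1-D waveguide (Kelly-Lochbaum) vocal tract synthesis sketch.

    areas      : cross-sectional tube areas, glottis to lips (assumed values)
    excitation : glottal source signal, one sample per waveguide step
    glottal_r  : reflection coefficient at the glottal end (assumed value)
    lip_r      : reflection coefficient at the lip end (assumed value)
    """
    areas = np.asarray(areas, dtype=float)
    n = len(areas)
    # Pressure-wave reflection coefficients at the junctions between sections.
    k = (areas[:-1] - areas[1:]) / (areas[:-1] + areas[1:])
    fwd = np.zeros(n)   # right-going traveling wave (towards the lips)
    bwd = np.zeros(n)   # left-going traveling wave (towards the glottis)
    out = np.empty(len(excitation))
    for t, x in enumerate(excitation):
        # Radiation at the lips: part reflects back, the rest is output.
        out[t] = (1.0 + lip_r) * fwd[-1]
        bwd_last = lip_r * fwd[-1]
        # Kelly-Lochbaum scattering at each interior junction.
        new_fwd = np.empty(n)
        new_bwd = np.empty(n)
        new_fwd[1:] = (1.0 + k) * fwd[:-1] - k * bwd[1:]
        new_bwd[:-1] = k * fwd[:-1] + (1.0 - k) * bwd[1:]
        new_bwd[-1] = bwd_last
        # Glottal boundary: inject the source plus the reflected wave.
        new_fwd[0] = x + glottal_r * bwd[0]
        fwd, bwd = new_fwd, new_bwd
    return out

# Hypothetical usage: a crude vowel-like area function driven by a pulse train.
fs, f0 = 16000, 110                        # sample rate and pitch (assumed)
excitation = np.zeros(fs // 2)
excitation[:: fs // f0] = 1.0              # impulse-train glottal source
areas = np.array([2.6, 1.8, 1.0, 0.8, 1.2, 2.4, 4.0])  # illustrative only
audio = kelly_lochbaum(areas, excitation)
```

Running this produces a buzzy, vowel-like tone whose formants are set by the area function. The paper's contribution is to have a DNN predict such tract parameters directly from real speech, and to use a richer digital waveguide mesh in place of this 1-D chain.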
Original language: English
Title of host publication: Interspeech 2017
Publisher: ISCA (International Speech Communication Association)
Pages: 234-238
DOIs
Publication status: Published - 2017

Publication series

Name: INTERSPEECH
ISSN (Electronic): 1990-9772

Bibliographical note

© 2017 ISCA. Uploaded in accordance with the publisher’s self-archiving policy. Further copying may not be permitted; contact the publisher for details.

Keywords

  • speech synthesis
  • digital waveguide mesh
  • deep neural network
