Joint Audiovisual Hidden Semi-Markov Model-based Speech Synthesis

by Dietmar Schabus, Michael Pucher, Gregor Hofer
Abstract:
This paper investigates joint speaker-dependent audiovisual Hidden Semi-Markov Models (HSMMs), in which the visual models produce a sequence of 3D motion-tracking data used to animate a talking head, while the acoustic models are used for speech synthesis. We trained separate acoustic, visual, and joint audiovisual models for four Austrian German speakers and show that the joint models outperform the other approaches in terms of synchronization quality of the synthesized visual speech. In addition, a detailed analysis of the acoustic and visual alignment is provided for the different models. Importantly, joint audiovisual modeling does not decrease acoustic synthetic speech quality compared to acoustic-only modeling, so the common duration model of the joint approach, which synchronizes the acoustic and visual parameter sequences, offers a clear advantage. Finally, the joint approach yields a single model that integrates the visual and acoustic speech dynamics.
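The core idea behind the joint approach, stacking the acoustic and visual parameter vectors frame by frame so that a single model state (and hence a single duration distribution) governs both streams, can be sketched as follows. This is an illustrative toy, not the authors' implementation; the feature dimensions and function name are assumptions.

```python
import numpy as np

def stack_audiovisual(acoustic, visual):
    """Concatenate per-frame acoustic and visual parameter vectors into
    one joint observation sequence. A single HSMM state (with a single
    duration distribution) then models both streams, keeping the
    synthesized audio and facial motion synchronized by construction.

    Note: dimensions below are illustrative, not from the paper.
    """
    assert acoustic.shape[0] == visual.shape[0], "streams must be frame-aligned"
    return np.hstack([acoustic, visual])

# Toy example: 100 frames of 40-dim acoustic features (e.g. mel-cepstra)
# and 90-dim visual features (e.g. 30 tracked 3D markers, flattened).
T = 100
acoustic = np.random.randn(T, 40)
visual = np.random.randn(T, 90)
joint = stack_audiovisual(acoustic, visual)
print(joint.shape)  # (100, 130)
```

Training on such stacked observations is what ties the two modalities to one duration model; the separate acoustic-only and visual-only baselines in the paper instead model each stream (and its durations) independently.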
Reference:
Dietmar Schabus, Michael Pucher, Gregor Hofer, “Joint Audiovisual Hidden Semi-Markov Model-based Speech Synthesis”, IEEE Journal of Selected Topics in Signal Processing, vol. 8, no. 2, pp. 336–347, April 2014.
Bibtex Entry:
@Article{Schabus2014a,
  Title                    = {Joint Audiovisual Hidden Semi-Markov Model-based Speech Synthesis},
  Author                   = {Dietmar Schabus and Michael Pucher and Gregor Hofer},
  Journal                  = {IEEE Journal of Selected Topics in Signal Processing},
  Year                     = {2014},

  Month                    = apr,
  Number                   = {2},
  Pages                    = {336--347},
  Volume                   = {8},

  Abstract                 = {This paper investigates joint speaker-dependent audiovisual Hidden Semi-Markov Models (HSMM) where the visual models produce a sequence of 3D motion tracking data that is used to animate a talking head and the acoustic models are used for speech synthesis. Different acoustic, visual, and joint audiovisual models for four different Austrian German speakers were trained and we show that the joint models perform better compared to other approaches in terms of synchronization quality of the synthesized visual speech. In addition, a detailed analysis of the acoustic and visual alignment is provided for the different models. Importantly, the joint audiovisual modeling does not decrease the acoustic synthetic speech quality compared to acoustic-only modeling so that there is a clear advantage in the common duration model of the joint audiovisual modeling approach that is used for synchronizing acoustic and visual parameter sequences. Finally, it provides a model that integrates the visual and acoustic speech dynamics.},
  Doi                      = {10.1109/JSTSP.2013.2281036},
  ISSN                     = {1932-4553},
  Keywords                 = {Acoustics;Hidden Markov models;Joints;Speech;Synchronization;Training;Visualization;Audiovisual speech synthesis;HMM-based speech synthesis;facial animation;hidden Markov model;speech synthesis;talking head},
}