Material Detail

Graphical Models for Speech Recognition: Articulatory and Audio-Visual Models

This video was recorded at the Machine Learning Summer School (MLSS), Chicago 2009. Since the 1980s, the dominant approach to automatic speech recognition has been the hidden Markov model (HMM), in which each state corresponds to a phoneme, or part of a phoneme, in the context of its neighboring phonemes. Despite their crude approximation of the speech signal, and the large margin for improvement that remains, HMMs have proven difficult to beat. In the last few years, there has been increasing interest in more complex graphical models for speech recognition involving multiple streams of states. I will describe two such approaches: one models pronunciation variation as the result of the "sloppy" behavior of articulatory variables (the states of the lips, tongue, etc.), and the other models audio-visual speech.
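
As a rough sketch of the baseline HMM framing described above (not the lecture's own models), the following Python snippet runs the forward algorithm over a toy left-to-right phone HMM. The phone set, transition and emission probabilities, and discrete observation symbols are hypothetical illustrative values; the graphical models discussed in the talk would replace this single state stream with multiple coupled streams (e.g., articulatory variables, or separate audio and visual states).

```python
import numpy as np

# Toy single-stream HMM over phone states (hypothetical values, for illustration only).
states = ["k", "ae", "t"]          # e.g., the phones of the word "cat"
obs_symbols = ["o1", "o2", "o3"]   # a small discrete acoustic-symbol alphabet

# Left-to-right transition matrix: A[i, j] = P(state j at t+1 | state i at t).
A = np.array([
    [0.6, 0.4, 0.0],
    [0.0, 0.7, 0.3],
    [0.0, 0.0, 1.0],
])

# Emission matrix: B[i, k] = P(observation symbol k | state i).
B = np.array([
    [0.7, 0.2, 0.1],
    [0.1, 0.6, 0.3],
    [0.2, 0.2, 0.6],
])

# Initial state distribution: start in the first phone.
pi = np.array([1.0, 0.0, 0.0])

def forward(obs_indices):
    """Forward algorithm: total likelihood of an observation sequence under the HMM."""
    alpha = pi * B[:, obs_indices[0]]
    for o in obs_indices[1:]:
        alpha = (alpha @ A) * B[:, o]   # propagate through transitions, then weight by emission
    return alpha.sum()

# Likelihood of observing the sequence o1, o2, o3.
print(forward([0, 1, 2]))
```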