
Segmentation-robust Representations, Matching, and Modeling for Sign Language Recognition

This video was recorded at the Workshop on Gesture Recognition and the launch of a benchmark program. Distinguishing true signs from the transitional, extraneous movements a signer makes while moving from one sign to the next is a serious hurdle in the design of continuous sign language recognition systems. The problem is compounded by ambiguity in segmentation and by occlusions, which propagate errors to higher levels of recognition. This talk describes our experience with representations and matching methods, particularly those that can handle errors in low-level segmentation and uncertainty in segmenting signs within sentences. We have formulated a framework that addresses both problems: (i) a nested level-building dynamic programming approach, used when training data are scarce, and (ii) an HMM-based approach generalized to handle multiple possible observations, used when statistical models of the signs are available. We will also discuss an automated, weakly unsupervised approach to extracting and learning models for continuous signs directly from continuous sentences, which can help build training data for the recognition process. Our publications on these issues can be found at http://marathon.csee.usf.edu/ASL/. Disclaimer: there may be mistakes or omissions in the interpretation, as the interpreters are not experts in the field of interest and performed a simultaneous translation without comprehensive preparation.
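To make the level-building idea more concrete, the sketch below is a minimal Python illustration, not the authors' implementation. It assumes a simple setup: each frame of a sentence is a feature vector, segment cost is plain dynamic time warping against a sign template, and the names (`level_building_match`, `sign_templates`, the segment-length bounds) are hypothetical choices for the example.

```python
import numpy as np

def dtw_distance(template, segment):
    """DTW cost between a sign template and a candidate segment
    (both are arrays of per-frame feature vectors)."""
    n, m = len(template), len(segment)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(template[i - 1] - segment[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def level_building_match(sentence, sign_templates, max_signs=5,
                         min_len=5, max_len=40):
    """Level-building DP: level l at frame t stores the best cost of
    explaining the first t frames of `sentence` with exactly l signs.
    Backpointers recover the sign labels and their boundaries."""
    T = len(sentence)
    INF = np.inf
    best = np.full((max_signs + 1, T + 1), INF)
    back = {}  # (level, end_frame) -> (start_frame, sign_label)
    best[0, 0] = 0.0
    for level in range(1, max_signs + 1):
        for end in range(min_len, T + 1):
            for start in range(max(0, end - max_len), end - min_len + 1):
                if best[level - 1, start] == INF:
                    continue
                segment = sentence[start:end]
                for label, template in sign_templates.items():
                    cost = best[level - 1, start] + dtw_distance(template, segment)
                    if cost < best[level, end]:
                        best[level, end] = cost
                        back[(level, end)] = (start, label)
    # choose the number of signs with the lowest total cost at the last frame
    level = int(np.argmin(best[1:, T])) + 1
    if best[level, T] == INF:
        return []
    # trace back the recognized sign sequence and boundaries
    labels, end = [], T
    for l in range(level, 0, -1):
        start, label = back[(l, end)]
        labels.append((label, start, end))
        end = start
    return labels[::-1]
```

A call such as `level_building_match(sentence_feats, {"HELLO": t1, "THANKS": t2})` would return a list of (label, start_frame, end_frame) triples, so sign labels and sentence segmentation are decided jointly rather than segmenting first and recognizing afterwards. The approach described in the talk is more elaborate (nested level building, handling of low-level segmentation errors), but the joint search over boundaries and labels is the shared principle.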

