
Multimodal Integration for Meeting Group Action Segmentation and Recognition

This video was recorded at the 2nd Joint Workshop on Multimodal Interaction and Related Machine Learning Algorithms, Edinburgh 2005. We address the problem of segmenting and recognizing sequences of multimodal human interactions in meetings. These interactions can be seen as a rough structure of a meeting, and can be used either as input to a meeting browser or as a first step towards a higher-level semantic analysis of the meeting. A common lexicon of multimodal group meeting actions, a shared meeting data set, and a common evaluation procedure enable us to compare the different approaches. We compare three multimodal feature sets and four modelling infrastructures: a higher semantic feature approach, multi-layer HMMs, a multi-stream DBN, and a multi-stream mixed-state DBN for disturbed data.
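
To make the segmentation idea concrete, the following is a minimal sketch, not the authors' implementation: it assumes early (feature-level) fusion of two toy feature streams, a placeholder action lexicon, and a randomly initialized emission model, and uses generic Viterbi decoding, the inference step shared by HMM- and DBN-based approaches, to turn frame-wise scores into contiguous action segments.

# Illustrative sketch (not the paper's system): early fusion of two
# feature streams, then Viterbi decoding over a small HMM whose hidden
# states stand for group actions. All names and numbers are placeholders.
import numpy as np

# Hypothetical lexicon of group actions; the paper uses a shared common
# lexicon, but these labels are stand-ins for illustration.
ACTIONS = ["discussion", "monologue", "presentation", "note-taking"]


def viterbi(log_emis, log_trans, log_init):
    """Most likely hidden-state path for a T x N matrix of per-frame
    log emission scores, given N x N log transitions and N log priors."""
    T, N = log_emis.shape
    delta = np.empty((T, N))
    back = np.zeros((T, N), dtype=int)
    delta[0] = log_init + log_emis[0]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_trans  # scores[i, j]: i -> j
        back[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_emis[t]
    path = np.zeros(T, dtype=int)
    path[-1] = delta[-1].argmax()
    for t in range(T - 2, -1, -1):
        path[t] = back[t + 1, path[t + 1]]
    return path


rng = np.random.default_rng(0)
T, N = 200, len(ACTIONS)

# Stand-ins for per-frame audio and visual features; a real system would
# extract e.g. speaker-turn and motion descriptors from the meeting data.
audio = rng.normal(size=(T, 6))
visual = rng.normal(size=(T, 4))
fused = np.concatenate([audio, visual], axis=1)  # early feature fusion

# Toy emission model: random projection of fused features to one score
# per action, log-softmax normalized. A trained model would supply these.
W = rng.normal(size=(fused.shape[1], N))
scores = fused @ W
log_emis = scores - np.logaddexp.reduce(scores, axis=1, keepdims=True)

# Sticky self-transitions favor contiguous segments, which is what turns
# frame-wise recognition into a segmentation of the meeting.
trans = np.full((N, N), 0.02 / (N - 1))
np.fill_diagonal(trans, 0.98)
log_trans = np.log(trans)
log_init = np.log(np.full(N, 1.0 / N))

states = viterbi(log_emis, log_trans, log_init)

# Collapse frame-level labels into (action, start, end) segments.
boundaries = np.flatnonzero(np.diff(states)) + 1
starts = np.concatenate([[0], boundaries])
ends = np.concatenate([boundaries, [T]])
for s, e in zip(starts, ends):
    print(f"{ACTIONS[states[s]]}: frames {s}-{e - 1}")

The multi-stream models compared in the talk differ mainly in where fusion happens: the sketch above fuses at the feature level, whereas a multi-stream DBN keeps per-stream state variables and couples them in the model itself.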
