This video was recorded at the 2nd Joint Workshop on Multimodal Interaction and Related Machine Learning Algorithms, Edinburgh, 2005. In this paper, we report on the infrastructure we have developed to support our research on multimodal cues for understanding meetings. With our focus on multimodality, we investigate the interaction among speech, gesture, posture, and gaze in meetings. For this purpose, a high-quality multimodal corpus is being produced.