
Focusing Human Attention on the "Right" Visual Data

This video was recorded at the IEEE International Conference on Multimedia & Expo (ICME), Melbourne, 2012. Widespread visual sensors and unprecedented connectivity have left us awash with visual data, from online photo collections and home videos to news footage, medical images, and surveillance feeds. Which images and videos among them warrant human attention? This talk focuses on two problem settings in which this question is critical: supervised learning of object categories, and unsupervised video summarization. In the first setting, the challenge is to sift through candidate training images and select those that, if labeled by a human, would be most informative to the recognition system. In the second, the challenge is to sift through a long-running video and select only the essential parts needed to summarize it for a human viewer. I will present our recent research addressing these problems, including novel algorithms for large-scale active learning and egocentric video synopses for wearable cameras. Both domains demonstrate the importance of "semi-automating" certain computer vision tasks, and suggest exciting new applications for large-scale visual analysis.
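The first setting described above, selecting the unlabeled images whose labels would most help the classifier, is the core idea of active learning. The talk's own large-scale algorithms are not detailed on this page, so the sketch below illustrates only a common baseline strategy, uncertainty (entropy) sampling; the function name and the toy probability pool are illustrative assumptions, not from the talk.

```python
# Minimal sketch of uncertainty-based active learning (entropy sampling).
# This is a generic baseline, NOT the specific algorithm from the talk.
import numpy as np

def select_most_informative(probs: np.ndarray, k: int) -> np.ndarray:
    """Pick the k unlabeled examples whose predicted class distribution
    has the highest entropy, i.e. where the classifier is least certain."""
    eps = 1e-12  # avoid log(0)
    entropy = -(probs * np.log(probs + eps)).sum(axis=1)
    return np.argsort(-entropy)[:k]  # indices, most uncertain first

# Toy pool of 4 candidate images with predicted class probabilities
# (rows sum to 1; in practice these come from the current classifier).
pool = np.array([
    [0.98, 0.01, 0.01],  # confident prediction -> low entropy
    [0.34, 0.33, 0.33],  # near-uniform -> high entropy
    [0.70, 0.20, 0.10],
    [0.50, 0.50, 0.00],
])
picked = select_most_informative(pool, k=2)
print(picked)  # the two most uncertain examples: indices 1 and 2
```

The selected images would then be routed to a human annotator, and the classifier retrained on the enlarged labeled set, the loop the abstract refers to as "semi-automating" recognition.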
