Classification with Deep Invariant Scattering Networks

This video was recorded at the 26th Annual Conference on Neural Information Processing Systems (NIPS), Lake Tahoe, 2012.

High-dimensional data representation is still in its infancy compared to statistical decision theory. How should kernels, or so-called feature vectors, be optimized? Should they increase or reduce dimensionality? Surprisingly, deep neural networks have managed to build such kernels, accumulating experimental successes. This lecture shows that invariance emerges as a central concept for understanding high-dimensional representations and the mysteries of deep networks. Intra-class variability is the curse of most high-dimensional signal classification. Fighting it means finding informative invariants. Standard mathematical invariants are either unstable for signal classification or not sufficiently discriminative. We explain how convolution networks compute stable, informative invariants over any group, such as translations, rotations, or frequency transpositions, by scattering data in high-dimensional spaces with wavelet filters. Beyond groups, invariants over manifolds can also be learned with unsupervised strategies that involve sparsity constraints. Applications are discussed and demonstrated on images and sounds.
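
The cascade sketched in the abstract (wavelet convolutions, a complex modulus, and a final averaging) can be made concrete in a few lines. The NumPy code below is a minimal illustration of a first- and second-order 1-D scattering transform, not the lecture's implementation: the Morlet-like Gaussian band-pass filters, the dyadic octave spacing, and the bandwidth choices are simplifying assumptions made here for brevity.

```python
import numpy as np

def bandpass(n, xi, sigma):
    """Morlet-like band-pass filter defined directly in the Fourier domain:
    a Gaussian bump centred at normalised frequency xi (cycles/sample)."""
    omega = np.fft.fftfreq(n)
    return np.exp(-((omega - xi) ** 2) / (2.0 * sigma ** 2))

def lowpass(n, sigma):
    """Gaussian low-pass window phi, the final averaging filter."""
    omega = np.fft.fftfreq(n)
    return np.exp(-(omega ** 2) / (2.0 * sigma ** 2))

def scattering_1d(x, n_octaves=4):
    """First- and second-order scattering coefficients of a 1-D signal
    (here * denotes convolution):
        S1[j1]     = mean( |x * psi_j1| * phi )
        S2[j1,j2]  = mean( ||x * psi_j1| * psi_j2| * phi )
    The modulus removes local phase, the averaging phi yields translation
    invariance, and the second wavelet layer recovers high-frequency
    information that the averaging discards."""
    n = len(x)
    X = np.fft.fft(x)
    xis = [0.25 * 2.0 ** (-j) for j in range(n_octaves)]   # dyadic centres
    psis = [bandpass(n, xi, sigma=xi / 2.0) for xi in xis]
    phi = lowpass(n, sigma=xis[-1] / 2.0)

    s1, s2 = {}, {}
    for j1, psi1 in enumerate(psis):
        u1 = np.abs(np.fft.ifft(X * psi1))                 # |x * psi_j1|
        U1 = np.fft.fft(u1)
        s1[j1] = np.real(np.fft.ifft(U1 * phi)).mean()
        # Only paths toward lower frequencies (j2 > j1) carry energy,
        # since |x * psi_j1| is concentrated below the frequency of psi_j1.
        for j2 in range(j1 + 1, n_octaves):
            u2 = np.abs(np.fft.ifft(U1 * psis[j2]))
            s2[(j1, j2)] = np.real(np.fft.ifft(np.fft.fft(u2) * phi)).mean()
    return s1, s2

# Translation-invariance check: a circular shift barely moves the coefficients.
rng = np.random.default_rng(0)
x = rng.standard_normal(512)
s1_a, _ = scattering_1d(x)
s1_b, _ = scattering_1d(np.roll(x, 37))
print(max(abs(s1_a[j] - s1_b[j]) for j in s1_a))   # ~1e-16, machine precision
```

Because the filters are fixed wavelets rather than learned weights, the stability and invariance of these coefficients can be analysed mathematically, which is the central point of the lecture.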
