Material Detail

Learning in Hierarchical Architectures: from Neuroscience to Derived Kernels

This video was recorded at the Machine Learning Summer School (MLSS), Chicago 2009. Understanding the processing of information in our cortex is a significant part of understanding how the brain works, arguably one of the greatest problems in science today. In particular, our visual abilities are computationally remarkable: computer science is still far from being able to create a vision engine that imitates them. Thus, the visual cortex and the problem of the computations it performs may well be a good proxy for the rest of the cortex and for intelligence itself. I will briefly review our work on developing a hierarchical feedforward architecture for object recognition based on the anatomy and the physiology of the primate visual cortex. These architectures compete with state-of-the-art computer vision systems; they mimic human performance on a specific but difficult natural image recognition task. I will sketch current work aimed at extending the model to the recognition of behaviors in time sequences of images and to accounting for attentional effects in human vision. I will then describe a new attempt (with S. Smale, L. Rosasco and J. Bouvrie) to develop a mathematics for hierarchical kernel machines centered around the notion of a recursively defined "derived kernel", directly suggested by the model and the underlying neuroscience of the visual cortex.
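The recursion behind the derived kernel can be made concrete with a small numerical sketch. The Python/NumPy snippet below follows the general construction described in the work of Smale, Rosasco, Bouvrie and Poggio: a base kernel on the smallest patches, a "neural response" that pools that kernel over a set of transformations (here, translations) against a fixed set of templates, and a derived kernel given by the normalized inner product of two neural responses. The 1-D "images", the patch sizes, the translation set H, and the random templates are illustrative assumptions for this sketch, not the exact setup from the talk.

import numpy as np

rng = np.random.default_rng(0)


def normalized_inner_product(x, y):
    # Base kernel K_1 on the smallest patches: a normalized dot product.
    nx, ny = np.linalg.norm(x), np.linalg.norm(y)
    if nx == 0 or ny == 0:
        return 0.0
    return float(x @ y) / (nx * ny)


def sub_patches(patch, size):
    # The transformation set H: all contiguous restrictions of `patch`
    # to windows of length `size` (a stand-in for translations).
    return [patch[i:i + size] for i in range(len(patch) - size + 1)]


def neural_response(patch, templates, prev_kernel, size):
    # N_m(f)(t) = max over h in H of K_{m-1}(f o h, t), one entry per template t.
    windows = sub_patches(patch, size)
    return np.array([max(prev_kernel(w, t) for w in windows) for t in templates])


def derived_kernel(f, g, templates, prev_kernel, size):
    # K_m(f, g): normalized inner product of the two neural responses.
    nf = neural_response(f, templates, prev_kernel, size)
    ng = neural_response(g, templates, prev_kernel, size)
    return normalized_inner_product(nf, ng)


# Two-layer example on 1-D "images" of length 16, with layer-1 patches of
# length 4 and eight random templates (both are hypothetical choices).
small, large = 4, 16
templates_1 = [rng.standard_normal(small) for _ in range(8)]

K1 = normalized_inner_product
K2 = lambda f, g: derived_kernel(f, g, templates_1, K1, small)

f = rng.standard_normal(large)
g = np.roll(f, 2)                      # a shifted copy of f
h = rng.standard_normal(large)

print("K2(f, shifted f):", K2(f, g))   # expected to be relatively high
print("K2(f, unrelated):", K2(f, h))   # expected to be lower

Because each layer takes a max over translations before comparing to templates, the derived kernel tends to score a shifted copy of an image higher than an unrelated one, which is the kind of built-in invariance the hierarchical architecture is meant to capture.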

