Learning Deep Hierarchies of Representations
This video was recorded at VideoLectures.NET - Single Lectures Series.

Theoretical work suggests that deep architectures may be computationally and statistically more efficient at representing highly varying functions, yet training deep architectures remained largely unsuccessful until the recent advent of algorithms based on unsupervised pre-training of each level of a hierarchically structured model. Several unsupervised criteria and procedures have been proposed for this purpose, starting with the Restricted Boltzmann Machine (RBM), which, when stacked, gives rise to Deep Belief Networks (DBNs). Although the partition function of an RBM is intractable, inference is tractable, and we review several successful learning algorithms that have been proposed, in particular those using weights that change quickly during learning rather than converging. Beyond their success as generative models, deep architectures based on RBMs and other unsupervised learning methods have made an impact by being used to initialize deep supervised neural networks. Even though these new algorithms have enabled the training of deep models, many questions remain about the nature of this difficult learning problem. We attempt to shed some light on these questions by comparing different successful approaches to training deep architectures and through extensive simulations investigating explanatory hypotheses. Finally, we describe our current research program, its objectives, and its challenges regarding learning representations at multiple levels of abstraction in order to compare web objects such as images, documents, and search engine queries; such comparisons are at the core of several information retrieval applications.
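The RBM training the abstract refers to can be illustrated with a minimal sketch. The code below is not the lecture's exact algorithm; it is a generic binary-binary RBM trained with one-step Contrastive Divergence (CD-1), with all names, hyperparameters, and toy data chosen for illustration. It shows the two points made above: inference (computing hidden-unit probabilities given visible units) is tractable and factorial, while the exact gradient of the log-likelihood is not, so learning relies on a short Gibbs chain instead.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Binary-binary Restricted Boltzmann Machine trained with CD-1 (illustrative)."""

    def __init__(self, n_visible, n_hidden, lr=0.1):
        self.W = 0.01 * rng.standard_normal((n_visible, n_hidden))
        self.b = np.zeros(n_visible)   # visible biases
        self.c = np.zeros(n_hidden)    # hidden biases
        self.lr = lr

    def hidden_probs(self, v):
        # P(h_j = 1 | v): tractable, factorial posterior -- the "inference" step
        return sigmoid(v @ self.W + self.c)

    def visible_probs(self, h):
        # P(v_i = 1 | h): equally tractable in the other direction
        return sigmoid(h @ self.W.T + self.b)

    def cd1_update(self, v0):
        # Positive phase: sample hidden units from the data
        ph0 = self.hidden_probs(v0)
        h0 = (rng.random(ph0.shape) < ph0).astype(float)
        # Negative phase: one Gibbs step approximates the model's distribution,
        # sidestepping the intractable partition function
        pv1 = self.visible_probs(h0)
        v1 = (rng.random(pv1.shape) < pv1).astype(float)
        ph1 = self.hidden_probs(v1)
        # Approximate gradient: data statistics minus reconstruction statistics
        n = v0.shape[0]
        self.W += self.lr * (v0.T @ ph0 - v1.T @ ph1) / n
        self.b += self.lr * (v0 - v1).mean(axis=0)
        self.c += self.lr * (ph0 - ph1).mean(axis=0)
        # Reconstruction error: a rough training proxy, not the true likelihood
        return float(np.mean((v0 - pv1) ** 2))

# Toy data: two repeated binary patterns
data = np.array([[1, 1, 0, 0], [0, 0, 1, 1]] * 50, dtype=float)
rbm = RBM(n_visible=4, n_hidden=2)
errors = [rbm.cd1_update(data) for _ in range(200)]
print(f"reconstruction error: {errors[0]:.3f} -> {errors[-1]:.3f}")
```

Stacking such RBMs (training each layer's RBM on the hidden representations of the layer below) yields the Deep Belief Networks mentioned above, and the learned weights can then initialize a deep supervised network.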