Dropout: A simple and effective way to improve neural networks

This video was recorded at the 26th Annual Conference on Neural Information Processing Systems (NIPS), Lake Tahoe, 2012. In a large feedforward neural network, overfitting can be greatly reduced by randomly omitting half of the hidden units on each training case. This prevents complex co-adaptations in which a feature detector is only helpful in the context of several other specific feature detectors. Instead, each neuron learns to detect a feature that is generally helpful for producing the correct answer given the combinatorially large variety of internal contexts in which it must operate. Random "dropout" gives big improvements on many benchmark tasks and sets new records for object recognition and molecular activity prediction. The Merck Molecular Activity Challenge was a contest hosted by...
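
As a concrete illustration of the procedure, here is a minimal NumPy sketch (not the speaker's implementation; the function name dropout_forward and the parameter p_drop are invented for this example). During training, each hidden unit is omitted independently with probability 0.5; at test time all units are kept and their activations are scaled by the keep probability, which approximates averaging the predictions of the many thinned networks sampled during training.

    import numpy as np

    rng = np.random.default_rng(0)

    def dropout_forward(h, p_drop=0.5, train=True):
        # Training: zero each hidden activation independently with
        # probability p_drop (0.5 in the talk).
        if train:
            mask = rng.random(size=h.shape) >= p_drop  # True = keep the unit
            return h * mask
        # Test: keep every unit but scale by the keep probability, so the
        # expected input to the next layer matches what it saw in training.
        return h * (1.0 - p_drop)

    # Example: a batch of 4 hidden vectors with 8 units each.
    h = rng.standard_normal((4, 8))
    train_out = dropout_forward(h)              # roughly half the entries zeroed
    test_out = dropout_forward(h, train=False)  # all units kept, halved

Equivalently, the test-time scaling can be folded into the network by halving the outgoing weights of each hidden unit, giving the "mean network" described in the talk.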