1 Billion Instances, 1 Thousand Machines and 3.5 Hours
This video was recorded at NIPS Workshops, Whistler 2009. Training conditional maximum entropy models on massive data sets requires significant computational resources, but by distributing the computation, training time can be significantly reduced. Recent theoretical results have shown that conditional maximum entropy models obtained by mixing the weights of independently trained models converge at the same rate as those trained with traditional distributed schemes, while requiring substantially less time. The efficiency gain comes primarily from reduced network communication: instead of exchanging parameters or gradients throughout training, each machine sends only its final weight vector, a cost that is often overlooked but crucial in practice.
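The sketch below illustrates the mixture-weight idea described above, assuming binary logistic regression (a simple conditional maximum entropy model) trained by plain gradient descent; the shard count, learning rate, and synthetic data are illustrative choices, not details from the talk.

```python
# Minimal sketch of mixture-weight training: each shard is trained
# independently, and the resulting weight vectors are averaged.
# Hyperparameters and data are hypothetical, for illustration only.
import numpy as np

def train_shard(X, y, lr=0.1, epochs=100, l2=1e-3):
    """Train one logistic regression model on a single data shard."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))          # predicted probabilities
        grad = X.T @ (p - y) / len(y) + l2 * w    # L2-regularized gradient
        w -= lr * grad
    return w

def mixture_weights(shards, mix=None):
    """Combine independently trained weights by (weighted) averaging.

    No parameters or gradients are exchanged during training, so the
    only network cost is one weight vector per machine at the end.
    """
    ws = np.stack([train_shard(X, y) for X, y in shards])
    mix = np.full(len(shards), 1.0 / len(shards)) if mix is None else mix
    return mix @ ws

# Illustrative usage: synthetic data split across 4 "machines".
rng = np.random.default_rng(0)
w_true = rng.normal(size=5)

def make_shard(n):
    X = rng.normal(size=(n, 5))
    y = (1.0 / (1.0 + np.exp(-X @ w_true)) > rng.uniform(size=n)).astype(float)
    return X, y

shards = [make_shard(1000) for _ in range(4)]
w_mix = mixture_weights(shards)
```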