FPGA-based MapReduce Framework for Machine Learning

This video was recorded at the NIPS Workshops, Whistler 2009. Machine learning algorithms are becoming increasingly important in our daily lives, but training on very large-scale datasets is usually very slow. The FPGA is a reconfigurable platform that can achieve high parallelism and data throughput, and much work has been done on accelerating machine learning algorithms on FPGAs. In this paper, we adapt Google's MapReduce model to the FPGA by realizing an on-chip MapReduce framework for machine learning algorithms. A processor scheduler is implemented to maximize computational resource utilization and balance load. In accordance with the characteristics of many machine learning algorithms, a common data access scheme is carefully designed to maximize data throughput for large-scale datasets. The framework hides task control, synchronization, and communication from designers, shortening development cycles. In a case study of RankBoost acceleration, a speedup of up to 31.8x over a CPU-based design is achieved, comparable to a fully hand-designed version. We also discuss implementations of two other machine learning algorithms, SVM and PageRank, to demonstrate the capability of the framework.
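
As a rough software analogue of the map/reduce decomposition described above (not the authors' on-chip implementation), one boosting round of a RankBoost-style case study could split weighted document pairs across mappers that emit partial losses for a candidate weak ranker, with a reducer aggregating them to pick the best ranker. The function names, data layout, and the specific pairwise loss below are illustrative assumptions, sketched only to show the map/reduce dataflow.

```python
# Illustrative sketch of a map/reduce round for RankBoost-style weak-ranker
# selection. All names and the exact statistic are assumptions, not the
# paper's on-chip implementation.
from concurrent.futures import ThreadPoolExecutor

def map_partial_loss(shard, feature, threshold):
    """Mapper: partial weighted loss of one candidate weak ranker
    (feature, threshold) over a shard of (doc_a, doc_b, weight) pairs,
    where doc_a should rank above doc_b."""
    loss = 0.0
    for doc_a, doc_b, weight in shard:
        # Weak ranker h(x) = 1 if x[feature] > threshold else 0
        h_a = 1.0 if doc_a[feature] > threshold else 0.0
        h_b = 1.0 if doc_b[feature] > threshold else 0.0
        loss += weight * (1.0 - (h_a - h_b))  # penalize mis-ordered pairs
    return loss

def reduce_total_loss(partials):
    """Reducer: aggregate partial losses from all mappers."""
    return sum(partials)

def run_round(shards, candidates, n_mappers=4):
    """One boosting round: evaluate every candidate weak ranker in parallel
    over the data shards and keep the one with the lowest total loss."""
    best = None
    with ThreadPoolExecutor(max_workers=n_mappers) as pool:
        for feature, threshold in candidates:
            partials = pool.map(
                lambda s: map_partial_loss(s, feature, threshold), shards)
            total = reduce_total_loss(partials)
            if best is None or total < best[0]:
                best = (total, feature, threshold)
    return best
```

In the on-chip setting described in the abstract, each mapper would correspond to a processing element fed by the shared data access scheme, and the processor scheduler would assign shards to idle mappers; the sketch above only mirrors that dataflow in software.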
