

Invited Talk 5 in Workshop: Personalized Recommendation Systems and Algorithms

Pushing the Limits of Recommender Training Speed: An MLPerf Experience

Tayo Oguntebi


Abstract:

This talk will focus on practical, real-world considerations involved in maximizing the training speed of deep learning recommender engines. Training deep learning recommenders at scale introduces an interesting set of challenges because of potential imbalances between compute and communication resources on many training platforms. Our experience benchmarking the DLRM workload for MLPerf on TensorFlow/TPUs will serve as an exemplar case. We will also use the lessons learned to suggest best practices for efficient design points when tuning recommender architectures.
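
To make the compute-versus-communication imbalance concrete, below is a minimal back-of-envelope sketch (not material from the talk itself) comparing per-step communication volume for the two parallelism patterns a DLRM-style trainer typically combines: data-parallel dense layers, whose gradients are all-reduced, and model-parallel embedding tables, whose looked-up rows are exchanged with an all-to-all. Every configuration number in it (parameter count, batch size, table count, embedding dimension) is a hypothetical illustration, not a measurement.

    """Back-of-envelope step-cost model for a DLRM-style trainer.

    Compares the communication volume of the dense all-reduce
    (data parallelism) with the embedding all-to-all (model
    parallelism). All sizes are hypothetical illustrations.
    """

    BYTES_PER_FLOAT = 4

    def allreduce_bytes(num_dense_params: int) -> int:
        # A ring all-reduce moves roughly 2x the gradient payload
        # per device per step.
        return 2 * num_dense_params * BYTES_PER_FLOAT

    def alltoall_bytes(batch_per_device: int, num_tables: int,
                       embed_dim: int) -> int:
        # Each step exchanges the looked-up embedding rows in the
        # forward pass and their gradients in the backward pass,
        # hence the factor of 2.
        return 2 * batch_per_device * num_tables * embed_dim * BYTES_PER_FLOAT

    if __name__ == "__main__":
        # Hypothetical DLRM-like configuration.
        dense = allreduce_bytes(num_dense_params=25_000_000)
        sparse = alltoall_bytes(batch_per_device=4096,
                                num_tables=26, embed_dim=128)
        print(f"dense all-reduce  : {dense / 1e6:8.1f} MB/step")
        print(f"embedding a2a     : {sparse / 1e6:8.1f} MB/step")

With these illustrative numbers the embedding exchange (~109 MB/step) is the same order of magnitude as the dense all-reduce (~200 MB/step), and it grows with batch size rather than model size, which is one reason communication can become the bottleneck as batches scale up.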