Pushing the Limits of Recommender Training Speed: An MLPerf Experience
Tayo Oguntebi

Fri Apr 09 12:30 PM -- 01:00 PM (PDT)

This talk will focus on practical, real-world considerations involved in maximizing the training speed of deep learning recommender engines. Training deep learning recommenders at scale introduces an interesting set of challenges because of potential imbalances between compute and communication resources on many training platforms. Our experience benchmarking the DLRM workload for MLPerf on TensorFlow/TPUs will serve as an exemplar case. We will then use the lessons learned to suggest best practices for efficient design points when tuning recommender architectures.
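The compute/communication tension mentioned above stems from the structure of DLRM-style models: large embedding tables whose lookups become cross-chip communication when sharded across accelerators, sitting alongside dense MLPs that are compute-bound. The sketch below is not material from the talk; it is a minimal, hypothetical DLRM-style model in TensorFlow (table sizes, feature counts, and layer widths are placeholder values) included only to show where the two kinds of work live.

```python
import tensorflow as tf

class TinyDLRM(tf.keras.Model):
    """Minimal DLRM-style sketch: embedding tables for sparse features,
    a bottom MLP for dense features, dot-product interaction, top MLP."""

    def __init__(self, vocab_sizes, emb_dim=16):
        super().__init__()
        # Embedding tables: at scale these are sharded across chips, so the
        # lookups turn into communication (e.g. all-to-all exchanges).
        self.tables = [tf.keras.layers.Embedding(v, emb_dim) for v in vocab_sizes]
        # Dense MLPs: the compute-heavy part of the model.
        self.bottom = tf.keras.Sequential([
            tf.keras.layers.Dense(64, activation="relu"),
            tf.keras.layers.Dense(emb_dim, activation="relu"),
        ])
        self.top = tf.keras.Sequential([
            tf.keras.layers.Dense(64, activation="relu"),
            tf.keras.layers.Dense(1, activation="sigmoid"),
        ])

    def call(self, inputs):
        dense, sparse = inputs  # dense: [B, D] floats, sparse: [B, T] ids
        d = self.bottom(dense)  # [B, emb_dim]
        embs = [tbl(sparse[:, i]) for i, tbl in enumerate(self.tables)]
        feats = tf.stack([d] + embs, axis=1)                       # [B, F, emb_dim]
        inter = tf.linalg.matmul(feats, feats, transpose_b=True)   # pairwise dot products
        x = tf.concat([d, tf.reshape(inter, [tf.shape(inter)[0], -1])], axis=1)
        return self.top(x)

# Hypothetical shapes, just to exercise the model on random data.
model = TinyDLRM(vocab_sizes=[1000, 5000, 200])
dense = tf.random.uniform([32, 13])
sparse = tf.random.uniform([32, 3], minval=0, maxval=200, dtype=tf.int32)
preds = model((dense, sparse))  # [32, 1]
```

In a toy setting like this, the MLPs dominate the step time; the balance flips once the embedding tables grow to production scale and their lookups must cross the interconnect, which is the imbalance the talk examines.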

Author Information

Tayo Oguntebi (Google LLC)

More from the Same Authors

  • 2020 Oral: MLPerf Training Benchmark
    Peter Mattson · Christine Cheng · Gregory Diamos · Cody Coleman · Paulius Micikevicius · David Patterson · Hanlin Tang · Gu-Yeon Wei · Peter Bailis · Victor Bittorf · David Brooks · Dehao Chen · Debo Dutta · Udit Gupta · Kim Hazelwood · Andy Hock · Xinyuan Huang · Daniel Kang · David Kanter · Naveen Kumar · Jeffery Liao · Deepak Narayanan · Tayo Oguntebi · Gennady Pekhimenko · Lillian Pentecost · Vijay Janapa Reddi · Taylor Robie · Tom St John · Carole-Jean Wu · Lingjie Xu · Cliff Young · Matei Zaharia
  • 2020 Poster: MLPerf Training Benchmark
    Peter Mattson · Christine Cheng · Gregory Diamos · Cody Coleman · Paulius Micikevicius · David Patterson · Hanlin Tang · Gu-Yeon Wei · Peter Bailis · Victor Bittorf · David Brooks · Dehao Chen · Debo Dutta · Udit Gupta · Kim Hazelwood · Andy Hock · Xinyuan Huang · Daniel Kang · David Kanter · Naveen Kumar · Jeffery Liao · Deepak Narayanan · Tayo Oguntebi · Gennady Pekhimenko · Lillian Pentecost · Vijay Janapa Reddi · Taylor Robie · Tom St John · Carole-Jean Wu · Lingjie Xu · Cliff Young · Matei Zaharia