

Workshop

Benchmarking Machine Learning Workloads on Emerging Hardware

Tom St John · Murali Emani · Wenqian Dong

Room 241

With evolving system architectures, hardware and software stacks, diverse machine learning workloads, and data, it is important to understand how these components interact with each other. Well-defined benchmarking procedures help evaluate and reason about the performance gains obtained from mapping ML workloads to systems.
Key problems that we seek to address are: (i) which representative ML benchmarks cater to workloads seen in industry, national labs, and interdisciplinary sciences; (ii) how to characterize ML workloads based on their interaction with hardware; (iii) what novel aspects of hardware, such as heterogeneity in compute, memory, and bandwidth, will drive their adoption; and (iv) how to model performance and project it onto next-generation hardware.
The workshop will invite experts in these research areas to present recent work and potential directions to pursue. Papers accepted through a rigorous review process will present state-of-the-art research efforts. A panel discussion will provide an interactive platform for exchange between the speakers and the audience.


Schedule