Workshop
Wed Mar 04 07:00 AM -- 03:30 PM (PST) @ Level 3 Room 9
Software-Hardware Codesign for Machine Learning Workloads
Ritwik Gupta · John Wohlbier · Tze Meng Low · Jeffrey Vetter · Natalia Vassilieva

Machine learning development workflows today involve the siloed design and optimization of task-specific software for a limited number of fixed hardware options. As a result, hardware and software are treated as separate components whose impact on each other cannot be assessed or optimized jointly. This abstraction leads to computationally inefficient machine learning workloads.

Recently, both software and hardware have become more domain specific. Machine-learning-focused software libraries provide operations and abstractions limited to workload-relevant use cases, and hardware makers have started manufacturing workload-relevant chips in the form of FPGAs, ASICs, and DLAs. However, these efforts remain largely independent of each other, resulting in inefficiencies and less-than-ideal workload performance.

Ideally, hardware and software would be codesigned for a specific ML workload, but investing in a particular hardware design is costly, especially in the face of the rapidly evolving state of ML. This workshop is soliciting extended abstracts that seek to bridge the gap between software and hardware in the areas of model design, model abstractions, model primitives, workload compression, hardware design, hardware optimization for power, data flow optimization, and compiler technologies.

Welcome, Introduction, Logistics (Programmatics)
DARPA (Talk)
SambaNova (Talk)
Groq (Talk)
Graphcore (Talk)
Break
Cerebras (Talk)
Oak Ridge National Laboratory (Talk)
Carnegie Mellon University (Talk)
University of Washington (Talk)
Columbia University (Talk)
Lunch
Facebook (Talk)
Xilinx (Talk)
Break
Intel (Talk)
Arm (Talk)
Panel