Poster
Willump: A Statistically-Aware End-to-end Optimizer for Machine Learning Inference
Peter Kraft · Daniel Kang · Deepak Narayanan · Shoumik Palkar · Peter Bailis · Matei Zaharia

Mon Mar 2nd 06:30 -- 09:00 PM @ Ballroom A #13

Systems for performing ML inference are widely deployed today. However, they typically use techniques designed for conventional data serving workloads, missing critical opportunities to leverage the statistical nature of ML inference. In this paper, we present Willump, an optimizer for ML inference that introduces two statistically motivated optimizations targeting ML applications whose performance bottleneck is feature computation. First, Willump automatically cascades feature computation: it classifies most data inputs using only high-value, low-cost features selected by a dataflow analysis algorithm and cost model, improving performance by up to 5x without statistically significant accuracy loss. Second, Willump accurately approximates ML top-K queries, discarding low-scoring inputs with an automatically constructed approximate model and then ranking the remainder with a more powerful model, improving performance by up to 10x with minimal accuracy loss. Both optimizations automatically tune their own parameters to maximize performance while meeting a target accuracy level. Willump combines these novel optimizations with powerful compiler optimizations to automatically generate fast inference code for ML applications. We show that Willump improves the end-to-end performance of real-world ML inference pipelines curated from major data science competitions by up to 16x without statistically significant loss of accuracy.
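The two optimizations described above can be illustrated with a minimal sketch. This is not Willump's API or implementation (Willump selects features and tunes thresholds automatically via dataflow analysis and a cost model); the `cheap_score`, `expensive_score`, and threshold values below are hypothetical stand-ins chosen only to show the cascade and approximate top-K patterns.

```python
# Hypothetical stand-in models (NOT Willump's API): a cheap model using only
# one low-cost feature, and an expensive model requiring all features.

def cheap_score(x):
    # Cheap model: reads only the first (low-cost) feature.
    return x[0]

def expensive_score(x):
    # Expensive model: stands in for costly full feature computation.
    return 0.7 * x[0] + 0.3 * x[1]

def cascade_classify(x, low=0.2, high=0.8):
    """Feature-computation cascade: classify an input with the cheap model
    when its score is far from the decision boundary; otherwise fall back
    to the expensive model. Willump tunes such thresholds automatically;
    low/high here are illustrative."""
    s = cheap_score(x)
    if s <= low:
        return 0  # confidently negative: low-cost features suffice
    if s >= high:
        return 1  # confidently positive: low-cost features suffice
    return 1 if expensive_score(x) >= 0.5 else 0  # uncertain: compute all features

def approximate_top_k(items, k, multiplier=4):
    """Approximate top-K: discard low-scoring inputs with the cheap model
    (keeping a shortlist of k * multiplier candidates), then rank only the
    shortlist with the more powerful model."""
    shortlist = sorted(items, key=cheap_score, reverse=True)[:k * multiplier]
    return sorted(shortlist, key=expensive_score, reverse=True)[:k]
```

The speedup in both cases comes from running the expensive model (and its feature computation) on only the small fraction of inputs where it matters: boundary cases for classification, and shortlisted candidates for top-K ranking.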

Author Information

Peter Kraft (Stanford University)
Daniel Kang (Stanford University)
Deepak Narayanan (Stanford)
Shoumik Palkar (Stanford)
Peter Bailis (Stanford University)
Matei Zaharia (Stanford and Databricks)

More from the Same Authors

  • 2020 Workshop: MLOps Systems »
    Debo Dutta · Matei Zaharia · Ce Zhang
  • 2020 Oral: MLPerf Training Benchmark »
    Peter Mattson · Christine Cheng · Gregory Diamos · Cody Coleman · Paulius Micikevicius · David Patterson · Hanlin Tang · Gu-Yeon Wei · Peter Bailis · Victor Bittorf · David Brooks · Dehao Chen · Debo Dutta · Udit Gupta · Kim Hazelwood · Andy Hock · Xinyuan Huang · Daniel Kang · David Kanter · Naveen Kumar · Jeffery Liao · Deepak Narayanan · Tayo Oguntebi · Gennady Pekhimenko · Lillian Pentecost · Vijay Janapa Reddi · Taylor Robie · Tom St John · Carole-Jean Wu · Lingjie Xu · Cliff Young · Matei Zaharia
  • 2020 Oral: Improving the Accuracy, Scalability, and Performance of Graph Neural Networks with Roc »
    Zhihao Jia · Sina Lin · Mingyu Gao · Matei Zaharia · Alex Aiken
  • 2020 Poster: Improving the Accuracy, Scalability, and Performance of Graph Neural Networks with Roc »
    Zhihao Jia · Sina Lin · Mingyu Gao · Matei Zaharia · Alex Aiken
  • 2020 Poster: MLPerf Training Benchmark »
    Peter Mattson · Christine Cheng · Gregory Diamos · Cody Coleman · Paulius Micikevicius · David Patterson · Hanlin Tang · Gu-Yeon Wei · Peter Bailis · Victor Bittorf · David Brooks · Dehao Chen · Debo Dutta · Udit Gupta · Kim Hazelwood · Andy Hock · Xinyuan Huang · Daniel Kang · David Kanter · Naveen Kumar · Jeffery Liao · Deepak Narayanan · Tayo Oguntebi · Gennady Pekhimenko · Lillian Pentecost · Vijay Janapa Reddi · Taylor Robie · Tom St John · Carole-Jean Wu · Lingjie Xu · Cliff Young · Matei Zaharia
  • 2020 Poster: Model Assertions for Monitoring and Improving ML Models »
    Daniel Kang · Deepti Raghavan · Peter Bailis · Matei Zaharia
  • 2020 Oral: Model Assertions for Monitoring and Improving ML Models »
    Daniel Kang · Deepti Raghavan · Peter Bailis · Matei Zaharia