Oral
Mon Mar 02 01:45 PM -- 02:10 PM (PST) @ Ballroom A
Willump: A Statistically-Aware End-to-end Optimizer for Machine Learning Inference
Peter Kraft · Daniel Kang · Deepak Narayanan · Shoumik Palkar · Peter Bailis · Matei Zaharia

Systems for performing ML inference are widely deployed today. However, they typically use techniques designed for conventional data serving workloads, missing critical opportunities to leverage the statistical nature of ML inference. In this paper, we present Willump, an optimizer for ML inference that introduces two statistically-motivated optimizations targeting ML applications whose performance bottleneck is feature computation. First, Willump automatically cascades feature computation: it classifies most data inputs using only high-value, low-cost features selected by a dataflow analysis algorithm and cost model, improving performance by up to 5x without statistically significant accuracy loss. Second, Willump accurately approximates ML top-K queries, discarding low-scoring inputs with an automatically constructed approximate model and then ranking the remainder with a more powerful model, improving performance by up to 10x with minimal accuracy loss. Both optimizations automatically tune their own parameters to maximize performance while meeting a target accuracy level. Willump combines these novel optimizations with powerful compiler optimizations to automatically generate fast inference code for ML applications. We show that Willump improves the end-to-end performance of real-world ML inference pipelines curated from major data science competitions by up to 16x without statistically significant loss of accuracy.
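
To make the first optimization concrete, here is a minimal sketch of the feature-computation cascade idea. All names (`cheap_features`, `all_features`, `approx_model`, `full_model`) are illustrative stand-ins, not Willump's actual API, and the fixed `threshold` is a simplification: the abstract notes that Willump tunes such parameters automatically to meet a target accuracy level.

```python
import numpy as np

def cascaded_predict(inputs, cheap_features, all_features,
                     approx_model, full_model, threshold=0.9):
    """Sketch of a feature cascade: classify each input with cheap
    features when the approximate model is confident; compute the
    full feature set and use the full model only for the rest.
    Assumes `inputs` is a NumPy array and models are sklearn-style."""
    cheap_X = cheap_features(inputs)              # low-cost features only
    probs = approx_model.predict_proba(cheap_X)   # cheap model's confidence
    confident = probs.max(axis=1) >= threshold    # "easy" inputs
    preds = np.empty(len(inputs), dtype=int)
    preds[confident] = probs[confident].argmax(axis=1)
    hard = ~confident
    if hard.any():                                # expensive path, ideally rare
        full_X = all_features(inputs[hard])       # compute all features
        preds[hard] = full_model.predict(full_X)
    return preds
```

Because most inputs take the cheap path, the expensive feature computations run on only a small fraction of the data, which is where the reported speedup comes from.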
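The second optimization, approximate top-K, can be sketched the same way: score every candidate with a cheap model, discard clear low scorers, and rank only the survivors with the powerful model. Again the names and the fixed oversampling factor `slack` are assumptions for illustration; Willump tunes this trade-off automatically.

```python
import numpy as np

def approximate_top_k(inputs, cheap_features, all_features,
                      approx_model, full_model, k, slack=4):
    """Sketch of approximate top-K: a cheap model prunes candidates,
    the full model ranks only the remaining slack*k of them.
    Returns indices of the final top-K items in `inputs`."""
    cheap_X = cheap_features(inputs)
    rough = approx_model.predict(cheap_X)        # cheap scores for everyone
    keep = np.argsort(rough)[-slack * k:]        # drop low scorers early
    full_X = all_features(inputs[keep])          # expensive features, few inputs
    precise = full_model.predict(full_X)         # accurate scores for survivors
    top = np.argsort(precise)[::-1][:k]          # best k among survivors
    return keep[top]
```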