

Oral

ML-EXray: Visibility into ML Deployment on the Edge

Hang Qiu · Ioanna Vavelidou · Jian Li · Evgenya Pergament · Pete Warden · Sandeep Chinchali · Zain Asgar · Sachin Katti

Exhibit Hall A
Outstanding Paper Award

Abstract:

Benefiting from expanding cloud infrastructure, today's neural networks achieve increasingly high performance when trained on the cloud. Model researchers spend months competing for an extra few percent of model accuracy. Yet when these models are actually deployed on edge devices, performance often drops by more than 10% all of a sudden, without an obvious cause. The key challenge is that there is little visibility into ML inference execution on edge devices, and little awareness of potential issues during the edge deployment process. ML-EXray provides visibility into layer-level details of ML execution and helps developers analyze and debug cloud-to-edge deployment issues. More often than not, the root cause lies not only in the model itself but in every operation throughout the data flow and the deployment process. Evaluations show that ML-EXray can effectively catch deployment issues such as pre-processing bugs, quantization issues, and suboptimal kernels; using ML-EXray, users need to write fewer than 15 lines of code to fully examine the edge deployment pipeline. By eradicating these issues, ML-EXray can recover model performance by up to 30%, pinpoint error-prone layers, and guide users to optimize kernel execution latency by two orders of magnitude. Code and APIs will be released as a multi-lingual instrumentation library and a Python deployment validation library.
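The abstract does not show ML-EXray's actual API, but the layer-level debugging idea it describes can be sketched with plain NumPy: given per-layer activations dumped from a cloud (reference) run and an edge run, find the first layer whose outputs diverge, which is where a pre-processing or quantization bug would first surface. This is a minimal illustrative sketch; the function name, threshold, and synthetic data below are assumptions, not part of the ML-EXray libraries.

```python
import numpy as np

def first_divergent_layer(reference_outputs, edge_outputs, rel_tol=0.05):
    """Compare per-layer activations from a reference (cloud) run and an
    edge run; return (layer index, max relative error) for the first layer
    whose outputs diverge beyond rel_tol, or None if all layers agree."""
    for i, (ref, edge) in enumerate(zip(reference_outputs, edge_outputs)):
        # Relative error, guarded against near-zero reference activations.
        denom = np.maximum(np.abs(ref), 1e-8)
        rel_err = float(np.max(np.abs(ref - edge.astype(ref.dtype)) / denom))
        if rel_err > rel_tol:
            return i, rel_err
    return None

# Illustrative usage with synthetic activations standing in for real dumps.
rng = np.random.default_rng(0)
ref = [rng.standard_normal((1, 8, 8, 4)).astype(np.float32) for _ in range(5)]
edge = [a.copy() for a in ref]
edge[3] += 0.5  # inject a hypothetical deployment bug at layer 3
print(first_divergent_layer(ref, edge))  # -> (3, <large relative error>)
```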
