Session 11: Tools

Moderator: Markus Weimer




Thu 8 April 13:30 - 13:50 PDT

(Oral)
Swift for TensorFlow: A portable, flexible platform for deep learning

Brennan Saeta · Denys Shabalin

Swift for TensorFlow is a deep learning platform that scales from mobile devices to clusters of hardware accelerators in data centers. It combines a language-integrated automatic differentiation system with multiple Tensor implementations within a modern, ahead-of-time compiled language oriented around mutable value semantics. The resulting platform has been validated through use in over 30 deep learning models and has been employed in both data center and mobile applications.

[ Paper PDF ]
Thu 8 April 13:50 - 14:10 PDT

(Oral)
Amazon SageMaker Debugger: A System for Real-Time Insights into Machine Learning Model Training

Nathalie Rauschmayr · Vikas Kumar · Rahul Huilgol · Andrea Olgiati · Satadal Bhattacharjee · Nihal Harish · Vandana Kannan · Amol Lele · Anirudh Acharya · Jared Nielsen · Lakshmi Ramakrishnan · Ishan Bhatt · Kohen Chia · Neelesh Dodda · Zhihan Li · Jiacheng Gu · Miyoung Choi · Balajee Nagarajan · Jeffrey Geevarghese · Denis Davydenko · Sifei Li · Lu Huang · Edward Kim · Tyler Hill · Krishnaram Kenthapadi

Manual debugging is a common productivity drain in the machine learning (ML) lifecycle: identifying underperforming training jobs requires constant developer attention and deep domain expertise, and as state-of-the-art models grow in size and complexity, debugging becomes increasingly difficult. Just as unit tests boost traditional software development, an automated ML debugging library can save time and money. We present Amazon SageMaker Debugger, a new feature of Amazon SageMaker that automatically captures relevant data during training and evaluation and presents it for online and offline inspection. Debugger lets users define a set of conditions, in the form of built-in or custom rules, that are applied to this data, enabling them to catch training issues and to monitor and debug ML model training in real time. These rules save time and money by alerting the developer and terminating a problematic training job early.
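As a hedged sketch of how such built-in rules are attached to a training job via the SageMaker Python SDK (the image URI, role, and instance settings below are illustrative placeholders, not values from the paper):

```python
# Sketch: attaching built-in Debugger rules to a SageMaker training job.
# The image URI, role ARN, and instance type are illustrative placeholders.
from sagemaker.debugger import Rule, rule_configs
from sagemaker.estimator import Estimator

rules = [
    # Alert when the training loss stops decreasing
    Rule.sagemaker(rule_configs.loss_not_decreasing()),
    # Alert when gradients become vanishingly small
    Rule.sagemaker(rule_configs.vanishing_gradient()),
]

estimator = Estimator(
    image_uri="<training-image-uri>",  # placeholder
    role="<execution-role-arn>",       # placeholder
    instance_count=1,
    instance_type="ml.m5.xlarge",
    rules=rules,  # Debugger evaluates these against tensors captured during training
)
# estimator.fit(...)  # rules run alongside the job and can stop it early
```

Custom rules follow the same pattern via `Rule.custom`, with the rule logic evaluated against the captured tensor data as the job runs.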

[ Paper PDF ]
Thu 8 April 14:10 - 14:30 PDT

(Oral)
FLAML: A Fast and Lightweight AutoML Library

Chi Wang · Qingyun Wu · Markus Weimer · Erkang Zhu

We study the problem of automating the choice of learners and hyperparameters for an ad-hoc training dataset and error metric at low computational cost, by conducting trials of different configurations on the given training data. We investigate the joint impact of multiple factors on both trial cost and model error, and propose several design guidelines. Following them, we build FLAML, a fast and lightweight library that optimizes for low computational resource use in finding accurate models. FLAML integrates several simple but effective search strategies into an adaptive system. It significantly outperforms top-ranked AutoML libraries on a large open-source AutoML benchmark under equal, and sometimes orders-of-magnitude smaller, budget constraints.

[ Paper PDF ]
Thu 8 April 14:30 - 14:50 PDT

(Oral)
To Bridge Neural Network Design and Real-World Performance: A Behaviour Study for Neural Networks

Xiaohu Tang · Shihao Han · Li Lyna Zhang · Ting Cao · Yunxin Liu

The boom of edge AI applications has spawned a great many neural network (NN) algorithms and inference platforms. Unfortunately, the fast pace of development in both fields has magnified the gaps between them. A well-designed NN algorithm with a reduced number of computation operations and memory accesses can easily incur increased inference latency in real-world deployment, due to a mismatch between the algorithm and the features of the target platform.

Therefore, it is critical to understand the behaviour characteristics of the NN design space on target platforms. However, none of the existing NN benchmarking or characterization studies can serve this purpose: they evaluate only a few sparse configurations in the design space, for the purpose of platform optimization rather than for understanding how scaling each design dimension affects NN algorithm efficiency. This paper presents the first empirical study of the NN design space aimed at learning NN behaviour characteristics on different inference platforms. The revealed characteristics can be used as guidelines for designing efficient NN algorithms. We profile ten thousand configurations from a cutting-edge NN design space on seven industrial edge AI platforms, and highlight seven key findings together with their causes and implications for efficient NN design.

[ Paper PDF ]