MLSys 2023 Accepted Papers
RecD: Deduplication for End-to-End Deep Learning Recommendation Model Training Infrastructure
Storage, Scheduling, and Networking
Mark Zhao · Dhruv Choudhary · Devashish Tyagi · Ajay Somani · Max Kaplan · Sung-Han Lin · Sarunya Pumma · Jongsoo Park · Aarti Basant · Niket Agarwal · Carole-Jean Wu · Christos Kozyrakis
Practical Edge Kernels for Integer-Only Vision Transformers Under Post-training Quantization
Edge
Zining Zhang · Bingsheng He · Zhenjie Zhang
Building Verified Neural Networks for Computer Systems with Ouroboros
Correctness and Security
Cheng Tan · Changliu Liu · Zhihao Jia · Tianhao Wei
Uniform Sparsity in Deep Neural Networks
Sparsity 1: Models and Algorithms
Saurav Muralidharan
Renee: End-to-End Training of Extreme Classification Models
Emerging Models and Domains
Vidit Jain · Jatin Prakash · Deepak Saini · Jian Jiao · Ramachandran Ramjee · Manik Varma
GlueFL: Reconciling Client Sampling and Model Masking for Bandwidth Efficient Federated Learning
Federated Learning
Shiqi He · Qifan Yan · Feijie Wu · Lanjun Wang · Mathias Lécuyer · Ivan Beschastnikh
Sparsity-Aware Memory Interface Architecture using Stacked XORNet Compression for Accelerating Pruned-DNN Models
Sparsity 2: Systems
Younghoon Byun · Seungsik Moon · Baeseong Park · Se Jung Kwon · Dongsoo Lee · Gunho Park · Eunji Yoo · Jung Gyu Min · Youngjoo Lee
PipeFisher: Efficient Training of Large Language Models Using Pipelining and Fisher Information Matrices
Parallel and Distributed Systems 1: Parallelism
Kazuki Osawa · Shigang Li · Torsten Hoefler
RevBiFPN: The Fully Reversible Bidirectional Feature Pyramid Network
Memory Optimization
Vitaliy Chiley · Vithursan Thangarasa · Abhay Gupta · Anshul Samar · Joel Hestness · Dennis DeCoste
AutoScratch: ML-Optimized Cache Management for Inference-Oriented GPUs
ML for Systems
Yaosheng Fu · Evgeny Bolotin · Aamer Jaleel · Gal Dalal · Shie Mannor · Jacob Subag · Noam Korem · Michael Behar · David Nellans
Validating Large Language Models with ReLM
Correctness and Security
Michael Kuchnik · Virginia Smith · George Amvrosiadis
Hotline Profiler: Automatic Annotation and A Multi-Scale Timeline for Visualizing Time-Use in DNN Training
Measurement and Analysis
Daniel Snider · Fanny Chevalier · Gennady Pekhimenko
ApproxCaliper: A Programmable Framework for Application-aware Neural Network Optimization
Measurement and Analysis
Yifan Zhao · Hashim Sharif · Peter Pao-Huang · Vatsin Shah · Arun Narenthiran Sivakumar · Mateus Valverde Gasparino · Abdulrahman Mahmoud · Nathan Zhao · Sarita Adve · Girish Chowdhary · Sasa Misailovic · Vikram Adve
Cupcake: A Compression Scheduler for Scalable Communication-Efficient Distributed Training
Parallel and Distributed Systems 2: Communication
Zhuang Wang · Xinyu Wu · Zhaozhuo Xu · T. S. Eugene Ng
Subgraph Stationary Hardware-Software Inference Co-Design
Edge
Payman Behnam · Alexey Tumanov · Tushar Krishna · Pranav Gadikar · Yangyu Chen · Jianming Tong · Yue Pan · Abhimanyu Rajeshkumar Bambhaniya · Alind Khare
Safe Optimized Static Memory Allocation for Parallel Deep Learning
Memory Optimization
Ioannis Lamprou · Zhen Zhang · Javier de Juan · Hang Yang · Yongqiang Lai · Etienne Filhol · Cedric Bastoul
SysNoise: Exploring and Benchmarking Training-Deployment System Inconsistency
Correctness and Security
Yan Wang · Yuhang Li · Ruihao Gong · Aishan Liu · Yanfei Wang · Jian Hu · Yongqiang Yao · Yunchen Zhang · Tianzi Xiaotian · Fengwei Yu · Xianglong Liu
Learning to Parallelize with OpenMP by Augmented Heterogeneous AST Representation
ML for Systems
Le Chen · Quazi Ishtiaque Mahmud · Hung Phan · Nesreen Ahmed · Ali Jannesari
SIRIUS: Harvesting Whole-Program Optimization Opportunities for DNNs
Compilers
Yijin Li · Jiacheng Zhao · Qianqi Sun · Haohui Mai · Lei Chen · Wanlu Cao · Yanfan Chen · Zhicheng Li · Ying Liu · Xinyuan Zhang · Xiyu Shi · Jie Zhao · Jingling Xue · Huimin Cui · Xiaobing Feng
Virtual Machine Allocation with Lifetime Predictions
ML for Systems
Hugo Barbalho · Patricia Kovaleski · Beibin Li · Luke Marshall · Marco Molinaro · Abhisek Pan · Eli Cortez · Matheus Leao · Harsh Patwari · Zuzu Tang · Larissa Rozales Gonçalves · David Dion · Thomas Moscibroda · Ishai Menache
Breadth-First Pipeline Parallelism
Parallel and Distributed Systems 1: Parallelism
Joel Lamy-Poirier
Efficient GPU Kernels for N:M-Sparse Weights in Deep Learning
Sparsity 2: Systems
Bin Lin · Ningxin Zheng · Lei Wang · Shijie Cao · Lingxiao Ma · Quanlu Zhang · Yi Zhu · Ting Cao · Jilong Xue · Yuqing Yang · Fan Yang
Be Careful with PyPI Packages: You May Unconsciously Spread Backdoor Model Weights
Correctness and Security
Tianhang Zheng · Hao Lan · Baochun Li
Reducing Activation Recomputation in Large Transformer Models
Memory Optimization
Vijay Anand Korthikanti · Jared Casper · Sangkug Lym · Lawrence McAfee · Michael Andersch · Mohammad Shoeybi · Bryan Catanzaro
Edge Impulse: An MLOps Platform for Tiny Machine Learning
Edge
Colby Banbury · Vijay Janapa Reddi · Alexander Elium · Shawn Hymel · David Tischler · Daniel Situnayake · Carl Ward · Louis Moreau · Jenny Plunkett · Matthew Kelcey · Mathijs Baaijens · Alessandro Grande · Dmitry Maslov · Arthur Beavis · Jan Jongboom · Jessica Quaye
Cuttlefish: Low-Rank Model Training without All the Tuning
Sparsity 1: Models and Algorithms
Hongyi Wang · Saurabh Agarwal · Pongsakorn U-chupala · Yoshiki Tanaka · Eric Xing · Dimitris Papailiopoulos
Pre-train and Search: Efficient Embedding Table Sharding with Pre-trained Neural Cost Models
Storage, Scheduling, and Networking
Daochen Zha · Louis Feng · Liang Luo · Bhargav Bhushanam · Zirui Liu · Yusuo Hu · Jade Nie · Yuzhen Huang · Yuandong Tian · Arun Kejariwal · Xia Hu
XRBench: An Extended Reality (XR) Machine Learning Benchmark Suite for the Metaverse
Emerging Models and Domains
Hyoukjun Kwon · Krishnakumar Nair · Jamin Seo · Jason Yik · Debabrata Mohapatra · Dongyuan Zhan · Jinook Song · Peter Capak · Peizhao Zhang · Peter Vajda · Colby Banbury · Mark Mazumder · Liangzhen Lai · Ashish Sirasao · Tushar Krishna · Harshit Khaitan · Vikas Chandra · Vijay Janapa Reddi
Communication-Efficient Graph Neural Networks with Probabilistic Neighborhood Expansion Analysis and Caching
Parallel and Distributed Systems 2: Communication
Tim Kaler · Alexandros Iliopoulos · Philip Murzynowski · Tao Schardl · Charles E. Leiserson · Jie Chen
GiPH: Generalizable Placement Learning for Adaptive Heterogeneous Computing
ML for Systems
Yi Hu · Chaoran Zhang · Edward Andert · Harshul Singh · Aviral Shrivastava · James Laudon · Yanqi Zhou · Bob Iannucci · Carlee Joe-Wong
ALCOP: Automatic Load-Compute Pipelining in Deep Learning Compiler for AI-GPUs
Compilers
Guyue Huang · Yang Bai · Liu Liu · Yuke Wang · Bei Yu · Yufei Ding · Yuan Xie
Transcending Runtime-Memory Tradeoffs in Checkpointing by being Fusion Aware
Memory Optimization
Horace He · Shangdi Yu
HyperGef: A Framework Enabling Efficient Fusion for Hypergraph Neural Network on GPUs
Emerging Models and Domains
Zhongming Yu · Guohao Dai · Shang Yang · Genghan Zhang · Hengrui Zhang · Feiwen Zhu · June Yang · Jishen Zhao · Yu Wang
PyTorch RPC: Distributed Deep Learning Built on Tensor-Optimized Remote Procedure Calls
Storage, Scheduling, and Networking
Pritam Damania · Shen Li · Alban Desmaison · Alisson Azzolini · Brian Vaughan · Edward Yang · Gregory Chanan · Guoqiang Jerry Chen · Hongyi Jia · Howard Huang · Joseph Spisak · Luca Wehrstedt · Lucas Hosseini · Manoj Krishnan · Omkar Salpekar · Pavel Belevich · Rohan Varma · Satendra Gera · Wanchao Liang · Shihao Xu · Soumith Chintala · Chaoyang He · Amir Ziashahabi · Salman Avestimehr · Zachary DeVito
FedTree: A Federated Learning System For Trees
Federated Learning
Qinbin Li · Zhaomin Wu · Yanzheng Cai · Yuxuan Han · Ching Man Yung · Tianyuan Fu · Bingsheng He
X-RLflow: Graph Reinforcement Learning for Neural Network Subgraphs Transformation
Compilers
Guoliang HE · Sean Parker · Eiko Yoneki
On Noisy Evaluation in Federated Hyperparameter Tuning
Federated Learning
Kevin Kuo · Pratiksha Thaker · Mikhail Khodak · John Nguyen · Daniel Jiang · Ameet Talwalkar · Virginia Smith
Exploiting Hardware Utilization and Adaptive Dataflow for Efficient Sparse Convolution in 3D Point Clouds
Sparsity 2: Systems
Ke Hong · Zhongming Yu · Guohao Dai · Xinhao Yang · Yaoxiu Lian · Zehao Liu · Ningyi Xu · Yu Wang
On Optimizing the Communication of Model Parallelism
Parallel and Distributed Systems 2: Communication
Yonghao Zhuang · Lianmin Zheng · Zhuohan Li · Eric Xing · Qirong Ho · Joseph Gonzalez · Ion Stoica · Hao Zhang · Hexu Zhao
FLINT: A Platform for Federated Learning Integration
Federated Learning
Ewen Wang · Boyi Chen · Mosharaf Chowdhury · Ajay Kannan · Franco Liang
Unified Convolution Framework: A compiler-based approach to support sparse convolutions
Sparsity 2: Systems
Jaeyeon Won · Changwan Hong · Charith Mendis · Joel Emer · Saman Amarasinghe
Efficiently Scaling Transformer Inference
Measurement and Analysis
Reiner Pope · Sholto Douglas · Aakanksha Chowdhery · Jacob Devlin · James Bradbury · Jonathan Heek · Kefan Xiao · Shivani Agrawal · Jeff Dean
Tutel: Adaptive Mixture-of-Experts at Scale
Parallel and Distributed Systems 1: Parallelism
Changho Hwang · Wei Cui · Yifan Xiong · Ziyue Yang · Ze Liu · Han Hu · Zilong Wang · Rafael Salas · Jithin Jose · Prabhat Ram · HoYuen Chau · Peng Cheng · Fan Yang · Mao Yang · Yongqiang Xiong
μ-TWO: 3× Faster Multi-Model Training with Orchestration and Memory Optimization
Storage, Scheduling, and Networking
Sanket Purandare · Abdul Wasay · Stratos Idreos · Animesh Jain
MegaBlocks: Efficient Sparse Training with Mixture-of-Experts
Sparsity 1: Models and Algorithms
Trevor Gale · Deepak Narayanan · Cliff Young · Matei Zaharia
Adaptive Message Quantization and Parallelization for Distributed Full-graph GNN Training
Parallel and Distributed Systems 2: Communication
Borui Wan · Juntao Zhao · Chuan Wu