MLSys 2020 Tentative Schedule Overview
Monday March 2nd
- 7:00 - 7:45 am Breakfast & Registration
- 7:45 - 8:00 am Opening Remarks
- 8:00 - 10:05 am Session 1 (5 papers): Distributed and parallel learning algorithms
- A System for Massively Parallel Hyperparameter Tuning
- PLink: Discovering and Exploiting Locality for Accelerated Distributed Training on the Public Cloud
- Federated Optimization in Heterogeneous Networks
- BPPSA: Scaling Back-propagation by Parallel Scan Algorithm
- Distributed Hierarchical GPU Parameter Server for Massive Scale Deep Learning Ads Systems
- 10:05 - 10:30 am Coffee Break
- 10:30 - 12:10 pm Session 2 (4 papers): Efficient model training
- Resource Elasticity in Distributed Deep Learning
- SLIDE: Training Deep Neural Networks with Large Outputs on a CPU faster than a V100-GPU
- FLEET: Flexible Efficient Ensemble Training for Heterogeneous Deep Neural Networks
- Checkmate: Breaking the Memory Wall with Optimal Tensor Rematerialization
- 12:10 - 1:30 pm Lunch on your own
- 1:30 - 2:30 pm Keynote: Chris Ré: Theory and Systems for Weak Supervision
- 2:30 - 4:10 pm Session 3 (4 papers): Efficient inference and model serving
- What is the State of Neural Network Pruning?
- SkyNet: A Hardware-Efficient Method for Object Detection and Tracking on Embedded Systems
- MNN: A Universal and Efficient Inference Engine
- Willump: A Statistically-Aware End-to-end Optimizer for Machine Learning Inference
- 4:10 - 4:30 pm Coffee Break
- 4:30 - 6:10 pm Session 4 (4 papers): Model/Data Quality and Privacy
- AimNet: Attention-based Learning for Missing Data Imputation
- Privacy-Preserving Bandits
- Understanding the Downstream Instability of Word Embeddings
- Model Assertions for Monitoring and Improving ML Models
- 6:10 - 6:15 pm Demo Previews
- 6:30 - 9:00 pm Posters, Demos, & Reception (dinner + drinks)
Tuesday March 3rd
- 7:00 - 8:00 am Breakfast & Registration
- 8:00 - 10:05 am Session 5 (5 papers): ML programming models and abstractions & ML applied to systems
- AutoPhase: Juggling HLS Phase Orderings in Random Forests with Deep Reinforcement Learning
- Automatically batching control-intensive programs for modern accelerators
- Predictive Precompute with Recurrent Neural Networks
- Sense & Sensitivities: The Path to General-Purpose Algorithmic Differentiation
- Ordering Chaos: Memory-Aware Scheduling of Irregularly Wired Neural Networks for Edge Devices
- 10:05 - 10:30 am Coffee Break
- 10:30 - 12:10 pm Session 6 (4 papers): Efficient inference and model serving
- Fine-Grained GPU Sharing Primitives for Deep Learning Applications
- Improving the Accuracy, Scalability, and Performance of Graph Neural Networks with Roc
- OPTIMUS: OPTImized matrix MUltiplication Structure for Transformer neural network accelerator
- PoET-BiN: Power Efficient Tiny Binary Neurons
- 12:10 - 1:30 pm Lunch on your own
- 1:30 - 2:30 pm Keynote: Shafi Goldwasser: The Emerging Role of Cryptography in Trustworthy AI
- 2:30 - 4:10 pm Session 7 (4 papers): Quantization of deep neural networks
- Memory-Driven Mixed Low Precision Quantization for Enabling Deep Network Inference on Microcontrollers
- Trained Quantization Thresholds for Accurate and Efficient Fixed-Point Inference of Deep Neural Networks
- Riptide: Fast End-to-End Binarized Neural Networks
- Searching for Winograd-aware Quantized Networks
- 4:10 - 4:30 pm Coffee Break
- 4:30 - 6:10 pm Session 8 (4 papers): Efficient model training
- Blink: Fast and Generic Collectives for Distributed ML
- A Systematic Methodology for Analysis of Deep Learning Hardware and Software Platforms
- MotherNets: Rapid Deep Ensemble Learning
- MLPerf Training Benchmark
- 6:10 - 6:15 pm Closing Remarks & MLSys 2021