
Resource-Constrained Machine Learning (ReCoML 2020)
Yaniv Ben Itzhak · Nina Narodytska · Christopher Aberger

Wed Mar 07:00 AM -- 03:30 PM PST @ Level 3 Room 8
Event URL: https://sites.google.com/view/recomlsys2020/home

* Note that you must register for the conference to attend the workshop *

The workshop will cover broad aspects of ML in resource-constrained environments, such as Internet-of-Things (IoT) devices and edge computing. Resource-constrained ML is challenging for several reasons. First, current ML models usually have high resource requirements in terms of CPU, memory, and I/O, and naive attempts to reduce this resource consumption significantly degrade ML performance; new ML models and frameworks are therefore required to achieve reasonable performance in resource-constrained environments. Second, resource-constrained environments such as edge computing and IoT are typically used for real-time applications, so model serving is a critical issue: a model must respond quickly and accurately while running on limited resources. The workshop will specifically cover the following topics: model/hardware architectures, model compression, interpretability, and use cases.
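To make the model-compression theme concrete, here is a minimal sketch of symmetric int8 post-training quantization of a weight tensor, one common way to shrink a model's memory footprint for constrained devices. This is an illustrative example only, assuming NumPy; the function names are hypothetical and not tied to any workshop paper.

```python
import numpy as np

def quantize_int8(weights):
    """Map float weights to int8 using a single per-tensor scale
    (symmetric quantization: the largest magnitude maps to 127)."""
    scale = np.max(np.abs(weights)) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover a float32 approximation of the original weights."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((4, 4)).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

# int8 storage is 4x smaller than float32, and the reconstruction
# error per weight is bounded by half the quantization step.
assert q.dtype == np.int8
assert np.max(np.abs(w - w_hat)) <= scale / 2 + 1e-6
```

Per-channel scales and quantization-aware training typically recover most of the accuracy lost by this simple per-tensor scheme.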

The organizers will select papers based on a combination of novelty, quality, interest, and impact.
Topics of interest include, but are not limited to:

- Compression of deep ML model architectures
- Quantized and low-precision neural networks
- Optimization of ML model architectures for resource-constrained environments
- Hardware accelerators for deep ML models
- Explainability of ML models in the context of resource-constrained environments
- ML deployments in resource-constrained environments, e.g., Internet-of-Things (IoT) devices and edge computing

Reviewing process: All submissions should include the authors' names and affiliations. Authors may post their papers on arXiv or other public forums.

Key dates related to the reviewing process are given below:
Paper submission deadline: January 15, 2020, AoE (Anywhere on Earth)
Decision notification: January 27, 2020

We invite research contributions in different formats:
Original research papers (up to 6 pages, not including references)
Position/opinion papers and extended abstracts (up to 4 pages, not including references)

Submission link: link

Dual submission policy: We will not accept any paper which, at the time of submission, is under review for another workshop or has already been published. This policy also applies to papers that overlap substantially in technical content with conference papers under review or previously published.
Proceedings: Accepted papers will be published in the form of online proceedings.
Submission format: To prepare your submission to ReCoML 2020, please use the LaTeX style files provided at SML2020style.tar.gz. Submissions must use a two-column format, and each reference must explicitly list all authors of the cited paper.

Organizing Committee
Yaniv Ben-Itzhak, VMware Research, ybenitzhak (at) vmware (dot) com
Nina Narodytska, VMware Research, nnarodytska (at) vmware (dot) com
Christopher R. Aberger, Stanford and SambaNova Systems, christopher.aberger (at) sambanovasystems (dot) ai

08:25 AM Welcome (Opening notes)
08:30 AM Shared Clusters for Machine Learning: Through the looking glass, by Prof. Shivaram Venkataraman, University of Wisconsin (Invited Talk)
09:15 AM QuaRL: Quantized Reinforcement Learning (Presentation)
09:30 AM Optimizing Sparse Matrix Operations for Deep Learning (Presentation)
09:45 AM Energy-Aware DNN Graph Optimization (Presentation)
10:00 AM Lunch (Break)
12:00 PM Low-Precision Arithmetic in Machine Learning Systems, by Prof. Christopher De Sa, Cornell (Invited Talk)
12:45 PM Efficient Memory Management for Deep Neural Net Inference (Presentation)
01:00 PM Once for All: Train One Network and Specialize it for Efficient Deployment (Presentation)
01:15 PM GReTA: Hardware Optimized Graph Processing For GNNs (Presentation)
01:30 PM Afternoon Break (Break)
02:00 PM Optimizing JPEG Quantization for Classification Networks (Presentation)
02:15 PM Conditional Neural Architecture Search (Presentation)
02:30 PM Transfer Learning with Fine-grained Sparse Networks: From Efficient Network Perspective (Presentation)
02:45 PM Closing remarks (End)
