Research On Algorithms & Data Structures (ROADS) to Mega-AI Models

Zhaozhuo Xu · Aditya Desai · Anshumali Shrivastava

Room 239
Thu 8 Jun, 5:30 a.m. PDT

The current state-of-the-art on numerous machine learning (ML) benchmarks comes from training enormous neural
network models on expensive, specialized hardware with massive quantities of data. However, this route to success in
deep learning is unsustainable. Training a large transformer model in natural language processing, for instance, can
incur a higher carbon footprint than the total lifetime cost of five cars [1]. In addition, these state-of-the-art models require
immense memory and computing resources during deployment, which hinders their practical impact. To realize the
full promise and benefits of artificial intelligence, we must solve these scalability challenges prevalent in both training
and inference and design new algorithms with step-function improvements in efficiency.

This workshop aims to bring together both computer science researchers and practitioners focused on ML efficiency
to offer innovative solutions towards efficient modeling workflows grounded in principled algorithm design. We invite
papers that address the algorithmic efficiency and scalability of any component of the ML pipeline, including data
management, optimization algorithms, model training, and deployment. Topics of interest include, but are not limited
to, the following:
• Algorithms and data structures to improve the computational complexity of the forward and backward passes
within deep neural networks.
• Model compression approaches for training and inference, including pruning, quantization, and parameter sharing.
• Data reduction (sketching, sampling, coresets, etc.) and active sampling approaches for faster training.
• Solutions to large-scale challenges in ML, such as large-output prediction, large-vocabulary input, longer-sequence
transformers, higher-resolution images, wider hidden layers, etc.
• Algorithmic solutions to deployment challenges on resource-constrained devices such as edge and mobile hardware.
• Data structures for accelerating model inference, reducing memory, or accelerating training.
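To give a concrete flavor of the model-compression topic above, here is a minimal sketch of symmetric post-training int8 quantization of a weight tensor. The function names and the per-tensor scaling scheme are illustrative assumptions, not a method prescribed by the workshop:

```python
def quantize_int8(weights):
    # Symmetric per-tensor quantization: pick one scale so that the
    # largest-magnitude weight maps to +/-127, then round to int8 range.
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    # Approximate reconstruction of the original floats.
    return [v * scale for v in q]

weights = [0.5, -1.2, 0.03, 0.9]
q, scale = quantize_int8(weights)
recovered = dequantize(q, scale)
```

With this scheme each weight is stored in one byte instead of four, and the rounding error per weight is bounded by half the scale.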
