Workshop
JOURNE: Journal of Opportunities, Unexpected Limitations, Retrospectives, Negative Results, and Experiences
Abhishek Gupta · Udit Gupta · Mayoore Jaiswal · Lillian Pentecost · Shagun Sodhani · David Brooks · Joelle Pineau
Computer systems and machine learning research is often driven by empirical results; improving efficiency and pushing the boundaries of the state of the art are essential goals, continually furthered by the vetting and discussion of published academic work. However, we observe that reflections, intermediate findings, and negative results are often quietly shelved in this process, despite the educational, scientific, and personal value of airing such experiences. Because negative results receive so little emphasis, important lessons learned and reflections are neither captured nor maintained by our research communities, further exacerbating the problem.
To this end, we aim to establish a workshop venue centered on reflective, in-depth conversations about the meandering path toward research publications, a path that embodies what science is fundamentally about: iterating over failures to arrive at a more robust understanding of the world.
JOURNE will combine invited talks from prominent ML and systems researchers, reflecting on the evolution of research trends, with contributed examples of negative results, retrospectives, and project post-mortems from the MLSys community. We will complement this programming with opportunities for candid discussion and constructive brainstorming about how and when these reflections, intermediate findings, missteps, and negative results are useful to the research community, and how they can be supported and brought to light. Our goal is to bring the fundamental principles of scientific research back to the forefront.
Schedule
Fri 8:00 a.m. - 8:15 a.m.
Welcome to JOURNE (Introduction)
Udit Gupta · Lillian Pentecost · Mayoore Jaiswal · Abhishek Gupta · Shagun Sodhani
Fri 8:15 a.m. - 9:00 a.m.
Thoughts on Research, Community and Impact (Invited Talk 1)
Ideas that worked. Ideas that didn’t. Ideas that published. Ideas that didn’t publish but had impact. I will focus on the road of TVM from a research project, to an open-source package, to a community, to the foundation of a company.
Luis Ceze
Fri 9:00 a.m. - 9:45 a.m.
The Need for Ethical Oversight in Machine Learning (Invited Talk 2)
When we participate in research with consequences, it’s important to reflect on how those consequences can have far-reaching impacts, for better and for worse. This talk investigates important considerations for ethical research practice and reflects on available mechanisms for accountability.
Deborah Raji
Fri 10:00 a.m. - 10:45 a.m.
The Future of ML is Tiny and Bright: Challenges and Opportunities (Invited Talk 3)
Vijay Janapa Reddi
Fri 10:45 a.m. - 11:30 a.m.
Bringing your Research Ideas to Life in Real-world Products (Invited Talk 4)
In this talk, I will share my perspective on the entire process of conceptualizing visionary research ideas, prototyping them, and then creating actual products from them in an industrial setting. Through various real-world examples, I will share the missteps encountered and the lessons learned from them, which have helped shape my career.
Shalini De Mello
Fri 12:30 p.m. - 1:30 p.m.
Industry/Academia Panel (Discussion Panel)
Zachary C Lipton · Udit Gupta · Lillian Pentecost · Shagun Sodhani · Abhishek Gupta · Mayoore Jaiswal · Michael Carbin · Devi Parikh · Gennady Pekhimenko
Fri 1:30 p.m. - 1:45 p.m.
Applying Maximal Coding Rate Reduction to Text Classification (Contributed Talk 1)
Text classification is one of the fundamental tasks in natural language processing (NLP), and recent deep learning models have made great progress in this area. However, the text features obtained by common NLP models such as Transformers, textCNN, etc., suffer from a high degree of anisotropy, which degrades the expressiveness of the learned features. The Maximal Coding Rate Reduction (MCR2) principle maximizes the difference between the coding rate of the whole dataset and the sum of the coding rates of the individual classes, which can lead to more isotropic representations. We attempt to migrate the MCR2 principle from image classification to text classification. The results show that applying the MCR2 principle enables models to obtain more uniform and inter-category-orthogonal text embeddings, but at the same time reduces text classification accuracy.
Yuxin Liang
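For context, the coding-rate-reduction objective mentioned in this abstract can be sketched numerically: it is the difference between the coding rate of the full feature matrix and the sum of the class-conditional coding rates. Below is a minimal NumPy sketch under stated assumptions (function names and the distortion parameter `eps` are illustrative choices, not the speaker's code):

```python
import numpy as np

def coding_rate(Z, eps=0.5):
    """Rate R(Z, eps): cost of coding the n x d feature matrix Z up to distortion eps."""
    n, d = Z.shape
    return 0.5 * np.linalg.slogdet(np.eye(d) + (d / (n * eps**2)) * Z.T @ Z)[1]

def coding_rate_per_class(Z, labels, eps=0.5):
    """Sum of class-conditional rates, each weighted by its class proportion."""
    n, d = Z.shape
    rate = 0.0
    for c in np.unique(labels):
        Zc = Z[labels == c]           # features belonging to class c
        nc = Zc.shape[0]
        rate += (nc / (2 * n)) * np.linalg.slogdet(
            np.eye(d) + (d / (nc * eps**2)) * Zc.T @ Zc)[1]
    return rate

def mcr2_objective(Z, labels, eps=0.5):
    """MCR2: rate of the whole dataset minus the summed per-class rates."""
    return coding_rate(Z, eps) - coding_rate_per_class(Z, labels, eps)
```

Maximizing this difference pushes features of different classes into distinct (ideally orthogonal) subspaces while expanding the overall feature distribution, which is the isotropy effect the abstract describes.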
Fri 1:45 p.m. - 2:00 p.m.
Deploying Deep Learning Applications on FPGA: Experiences and Learnings (Contributed Talk 2)
Ashwin Krishnan · Shagun Sodhani
Fri 2:00 p.m. - 2:15 p.m.
Fighting Ageism in Datasets: How Not to Oversample Images Using GANs (Contributed Talk 3)
In this paper, we introduce a method for dealing with imbalance in data, based on SMOTE oversampling and GAN-generated images. We assess the change in performance across different methods and discuss the results of the experiments. We also attempt to explain why this method did not succeed, in order to open a discussion on GAN-based resampling methods.
Kamil Pluciński · Hanna Klimczak
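For readers unfamiliar with the SMOTE oversampling this abstract builds on: the core step synthesizes a new minority-class sample by interpolating between a minority point and one of its k nearest minority-class neighbors. A minimal sketch, assuming plain NumPy and Euclidean distance (function name and parameters are illustrative, not the authors' implementation):

```python
import numpy as np

def smote_sample(X_min, k=5, n_new=100, rng=None):
    """Generate n_new synthetic minority samples by interpolating between
    each chosen minority point and one of its k nearest minority neighbors."""
    if rng is None:
        rng = np.random.default_rng(0)
    n = X_min.shape[0]
    # pairwise Euclidean distances within the minority class
    d = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)               # exclude each point from its own neighbors
    neighbors = np.argsort(d, axis=1)[:, :k]  # k nearest neighbors per point
    synth = []
    for _ in range(n_new):
        i = rng.integers(n)                   # random minority point
        j = neighbors[i, rng.integers(k)]     # one of its k neighbors
        u = rng.random()                      # interpolation factor in [0, 1)
        synth.append(X_min[i] + u * (X_min[j] - X_min[i]))
    return np.array(synth)
```

Because every synthetic point is a convex combination of two real minority samples, SMOTE can only fill in the region between existing points; the GAN-based alternative the talk examines instead tries to generate genuinely new samples, which is where its failure modes become interesting.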
Fri 2:15 p.m. - 2:30 p.m.
Pitfalls of Explainable ML: An Industry Perspective (Contributed Talk 4)
As machine learning (ML) systems take a more prominent and central role in contributing to life-impacting decisions, ensuring their trustworthiness and accountability is of utmost importance. Explanations sit at the core of these desirable attributes of an ML system, and the emerging field is frequently called “Explainable AI (XAI)” or “Explainable ML.” The goal of explainable ML is to intuitively explain the predictions of an ML system while adhering to the needs of various stakeholders. Many explanation techniques have been developed with contributions from both academia and industry. However, several existing challenges have not garnered enough interest and serve as roadblocks to the widespread adoption of explainable ML. In this short paper, we enumerate challenges in explainable ML from an industry perspective. We hope these challenges will serve as promising future research directions and contribute to democratizing explainable ML.
Sahil Verma
Fri 2:30 p.m. - 2:45 p.m.
Closing remarks (Closing)
Udit Gupta · Lillian Pentecost · Abhishek Gupta · Mayoore Jaiswal · Shagun Sodhani