Personalized recommendation is the task of recommending content to users based on their preferences and history. Providing personalized content is crucial for many emerging applications, including health care, fitness, education, food, and entertainment. Today, accurate and efficient recommendation of items powers many Internet services, such as online search, marketing, e-commerce, and video streaming. In fact, recent estimates show that recommendation systems drive many Internet businesses: in 2018, recommendation systems drove up to 35% of Amazon’s revenue, 75% of movies watched on Netflix, and 60% of videos watched on YouTube. In addition, personalized recommendation models account for 80% of all AI inference cycles in Facebook’s datacenters.
While the machine learning and systems research communities have devoted significant effort to optimizing AI, and deep neural networks in particular, the majority of this work studies AI-enabled perception, speech recognition, and natural language processing. As a result, efforts by machine learning and systems researchers have primarily focused on convolutional neural networks (CNNs) and recurrent neural networks (RNNs). However, not all services use CNNs and RNNs. In fact, as deep learning forms the backbone of many Internet services, AI for personalized recommendation is arguably one of the most impactful, widely used, and yet understudied applications of DNNs.
In addition to their importance, modern deep learning solutions for personalized recommendation impose unique compute, memory access, and storage requirements compared to CNNs and RNNs. However, in 2019, less than 2% of research papers were devoted to optimizing systems for recommendation engines.
To address this underinvestment from the research community, we propose a venue to discuss, share, and foster research into personalized recommendation systems and algorithms.
We invite participation in the Graph Neural Networks and Systems Workshop, to be held in conjunction with MLSys 2021.
Graph Neural Networks (GNNs) have emerged as one of the most popular areas of research in the field of machine learning and artificial intelligence. The core idea is to exploit the relationships among data samples to learn high-quality node, edge, and graph representations. In just a few years, GNNs have expanded from mostly theoretical and small-scale studies to providing state-of-the-art solutions to problems arising in diverse application domains. These range from domains that have traditionally relied on graph learning (e.g., information retrieval, recommendation, fraud detection, knowledge representation), to science and engineering domains whose underlying data is naturally represented as graphs (e.g., chemistry, bioinformatics, drug discovery, materials science, physics, circuit design), to areas of science and engineering that have not traditionally been the domain of graph methods (e.g., computer vision, natural language processing, computer graphics, reinforcement learning).
GNN research and applications present new and unique challenges for system design. Industrial users and researchers share some requirements but diverge in others, and this landscape evolves rapidly as new research results appear.
In the same spirit as MLSys, the goal of this workshop is to bring together experts working at the intersection of machine learning research and systems building, with a particular focus on GNNs. Topics include, but are not limited to:
● Systems for training and serving GNN models at scale
● System-level techniques to deal with complex graphs (heterogeneous, dynamic, temporal, etc.)
● Integration with graph and relational databases
● Distributed GNN training algorithms for large graphs
● Best practices for integrating with existing machine learning pipelines
● Specialized or custom hardware for GNNs
● GNN model understanding tools (debugging, visualization, introspection, etc.)
● GNN applications to improve system design and optimization
Through invited talks as well as oral and poster presentations by the participants, this workshop will showcase the latest advances in GNN systems and address challenges at the intersection of GNN research and system design.
Ubiquitous on-device artificial intelligence (AI) is the next step in transforming the myriad mobile computing devices in our everyday lives into a new class of truly “smart” devices capable of constantly observing, learning, and adapting to their environment. The 2nd On-Device Intelligence Workshop aims to advance the state of the art by bringing together researchers and practitioners to discuss key problems, disseminate new research results, and provide practical tutorial material.
This workshop focuses on the challenges involved in building integrated, scalable distributed systems for the healthcare analytics domain. Healthcare analytics offers a unique opportunity to explore scalable system design, since there has been a tectonic shift in the ability of medical institutions to capture and store unprecedented amounts of structured and unstructured medical data, including the new ability to stream unstructured medical data in real time. This shift has already contributed to an ecosystem of machine learning (ML) models being trained for a variety of clinical tasks. However, new approaches are required to build systems that can develop and deploy ML models based on distributed healthcare data that must necessarily be accessed under privacy-preserving constraints.
The goal of this workshop is to attract leading researchers to share and discuss their latest results involving approaches to building scalable platforms for privacy-aware collaborative learning and inference that can be applicable to the domain of healthcare analytics. The scope of the workshop includes (but is not limited to) the following challenges:
* Scalable and distributed learning
* Continuous federated learning with privacy constraints
* Enforcing soft real-time constraints for streaming data analytics
* Specialized heterogeneous hardware for learning and inference
* Scalable runtime and resource allocation systems
* Productive systems for developing scalable data analytics applications
Computer systems and machine learning research is often driven by empirical results; improving efficiency and pushing the boundaries of the state of the art are essential goals that are continually furthered by the vetting and discussion of published academic work. However, we observe and experience that reflection, intermediate findings, and negative results are often quietly shelved in this process, despite the educational, scientific, and personal value in airing such experiences. Given the lack of emphasis on negative results, important lessons learned and reflections are neither captured nor maintained by our research communities, further exacerbating the problem.
To this end, we aim to establish a workshop venue centered on reflective and in-depth conversations on the meandering path towards research publications, the path that science is inherently all about: iterating over failures to arrive at a more robust understanding of the world.
JOURNE will combine invited talks from prominent ML and Systems researchers on the evolution of and reflection on research trends with specific contributed examples of negative results, retrospectives, and project post-mortems in the MLSys community. We will complement this programming with opportunities for candid discussion and constructive brainstorming about how and when these reflections, intermediate findings, missteps, and negative results are useful for the research community and how they can be supported and brought to light. Our goal is to bring the fundamental principles of scientific research back to the forefront.
With evolving system architectures, hardware and software stacks, diverse machine learning workloads, and data, it is important to understand how these components interact with each other. Well-defined benchmarking procedures help evaluate and reason about the performance gains obtained from mapping ML workloads to systems.
Key problems that we seek to address are: (i) which representative ML benchmarks cater to workloads seen in industry, national labs, and interdisciplinary sciences; (ii) how to characterize ML workloads based on their interaction with hardware; (iii) what novel aspects of hardware, such as heterogeneity in compute, memory, and bandwidth, will drive their adoption; and (iv) how to model performance and make projections for next-generation hardware.
The workshop will invite experts in these research areas to present recent work and potential directions to pursue. Accepted papers, selected through a rigorous evaluation process, will present state-of-the-art research efforts. A panel discussion will foster an interactive platform for discussion between speakers and the audience.