"We invite participation in the Graph Neural Networks and Systems Workshop, to be held in conjunction with MLSys 2021.
Overview:
Graph Neural Networks (GNNs) have emerged as one of the most popular areas of research in the field of machine learning and artificial intelligence. The core idea is to explore the relationships among data samples to learn high-quality node, edge, and graph representations. In just the span of a few years, GNNs have expanded from mostly theoretical and small-scale studies to providing state-of-the-art solutions to many problems arising in diverse application domains. These range from domains that traditionally relied on graph learning (e.g., information retrieval, recommendations, fraud detection, knowledge representation), to science and engineering domains whose underlying data can be naturally represented as graphs (e.g., chemistry, bioinformatics, drug discovery, materials science, physics, circuit design), to areas of science and engineering that have not traditionally been the domain of graph methods (e.g., computer vision, natural language processing, computer graphics, reinforcement learning).
GNN research and applications present new and unique challenges for system design. Industrial users and researchers share some requirements but diverge in others. This landscape also evolves rapidly as new research results appear.
In the same spirit as MLSys, the goal of this workshop is to bring together experts working at the intersection of machine learning research and systems building, with a particular focus on GNNs. Topics include, but are not limited to:
● Systems for training and serving GNN models at scale
● System-level techniques to deal with complex graphs (heterogeneous, dynamic, temporal, etc.)
● Integration with graph and relational databases
● Distributed GNN training algorithms for large graphs
● Best practices to integrate with existing machine learning pipelines
● Specialized or custom hardware for GNNs
● GNN model understanding tools (debugging, visualization, introspection, etc.)
● GNN applications to improve system design and optimization
Through invited talks as well as oral and poster presentations by the participants, this workshop will showcase the latest advances in GNN systems and address challenges at the intersection of GNN research and system design.
Fri 7:00 a.m. - 7:10 a.m. | Welcome (Opening remarks)
Fri 7:10 a.m. - 7:40 a.m. | Keynote Talk: Graph Neural Networks For Learning About Never Before Seen Phenomena by Marinka Zitnik (Harvard) (Talk)
Prevailing methods for graphs require abundant label and edge information for learning. However, labeled examples can be incredibly scarce for the hardest and most impactful problems in science and medicine, such as novel drugs in development, emerging pathogens never seen before, and patients with rare diseases. In this talk, I describe our efforts to expand the scope and ease the applicability of graph representation learning for such challenging problems. First, I will outline SubGNN, a subgraph neural network for learning disentangled subgraph embeddings. SubGNN generates embeddings that capture complex subgraph topology, including the structure, neighborhood, and position of subgraphs. Second, I will introduce G-Meta, a theoretically justified meta-learning algorithm for graphs. G-Meta quickly adapts to a new task using only a handful of nodes or edges, and does so by learning from local subgraphs in other graphs or related, albeit disjoint, label sets. Finally, I will discuss applications. The new methods successfully predicted treatments for an emerging disease, which were later experimentally confirmed in the wet laboratory. Further, the methods helped discover dozens of ultra high-order drug combinations safe for patients, with considerably fewer unwanted side effects than today's treatments. Lastly, I will describe our efforts in learning actionable representations that allow users to receive predictions that can be interpreted meaningfully.
Fri 7:40 a.m. - 7:50 a.m. | Q&A
Fri 7:50 a.m. - 8:20 a.m. | Keynote Talk: Graphcore’s IPU and GNNs by Gianandrea Minneci (Graphcore) (Talk)
Recent research on Graph Neural Networks (GNNs) has shown that these algorithms can exceed state-of-the-art performance on applications with graph-structured inputs. These models present a new challenge for current machine learning accelerators because they combine dense, compute-intensive operations with sparse, memory-intensive operations. Furthermore, certain applications need to scale to graphs with up to billions of edges and unbalanced connections that follow power-law distributions. We present these requirements and describe their implications for machine learning accelerators. We describe Graphcore’s Colossus MK2 GC200 Intelligence Processing Unit (IPU) and multi-processor systems based on the M2000 platform. The IPU takes a radical approach to local memory: by distributing its large SRAM across its MIMD compute cores, it can access that memory at a fixed cost, independent of access patterns. We explain how systems based on the M2000 server leverage multi-phase execution paradigms to scale to thousands of IPUs and give them direct access to terabytes of DRAM, providing an effective scale-up solution.
Fri 8:20 a.m. - 8:30 a.m. | Q&A
Fri 8:30 a.m. - 8:50 a.m. | Break or virtual discussion
A virtual discussion space is available via the session link.
Fri 8:50 a.m. - 9:20 a.m. | Keynote Talk: Graph Representation Learning for Chip Design by Azalia Mirhoseini (Google) (Talk)
Many core problems in systems and hardware design are combinatorial optimization or decision-making tasks on graph-structured data. Examples of such problems are compiler optimization, physical design, and design verification, where the programs or hardware are described in graph formats. These computational graphs pose new challenges to ML-based algorithms since their state and action spaces are orders of magnitude larger than those of common AI benchmarks in robotics and games. In this talk, I will go over some of our research on tackling optimization problems on graph data and present our recent work on optimizing chip floorplanning with reinforcement learning. Our approach has the ability to learn from past experience and improve over time. The optimization relies on a new edge-based graph convolution model that captures the properties of the chip description graph, and the placement policy can generalize to unseen blocks. Our objective is to minimize PPA (power, performance, and area), and we show that, in under 6 hours, our method can generate placements that are superhuman or comparable to those of human experts on modern accelerator chips, whereas existing baselines require human experts in the loop and can take several weeks.
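To make the edge-based graph convolution idea concrete, here is a minimal sketch of one such layer under the scheme the abstract describes (per-edge embeddings from endpoint node features, node embeddings refreshed as the mean of incident edge embeddings); the function name, shapes, and single weight matrix are our own illustrative choices, not the talk's implementation.

```python
# Hedged sketch of an edge-based graph convolution layer: not the talk's
# implementation, just the pattern it describes. Assumes a directed graph
# given by sender/receiver index arrays.
import jax
import jax.numpy as jnp

def edge_based_conv(v, senders, receivers, w):
    """One layer: per-edge embeddings from endpoint features, followed by a
    mean of incident edge embeddings at each receiving node."""
    e = jnp.concatenate([v[senders], v[receivers]], axis=1) @ w   # (E, d_out)
    total = jax.ops.segment_sum(e, receivers, num_segments=v.shape[0])
    count = jax.ops.segment_sum(jnp.ones(e.shape[0]), receivers,
                                num_segments=v.shape[0])
    return total / jnp.clip(count, 1.0)[:, None]   # mean; isolated nodes -> 0
```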
Fri 9:20 a.m. - 9:30 a.m. | Q&A
Fri 9:30 a.m. - 10:00 a.m. | Keynote Talk: High Performance GNNs in JAX by Jonathan Godwin (DeepMind) (Talk)
Jraph (pronounced "giraffe") is a lightweight library for working with graph neural networks in JAX. It provides a data structure for graphs, a set of utilities for working with graphs, and a 'zoo' of forkable graph neural network models. In this talk we’ll cover the basics of Jraph, XLA, and graph nets, including how we manage padding for graphs with dynamic edge and node shapes. Then we’ll discuss how JAX makes it easier for us to write new kinds of graph neural networks with interesting applications in scientific domains such as simulation. Finally, we’ll also cover how we can straightforwardly use JAX to shard a graph net across multiple devices, allowing training on graphs with millions (or billions) of edges.
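As a small, concrete companion to the abstract, here is a minimal sketch of Jraph's core data structure and its padding utility; the toy graph, feature sizes, and padding targets are our own illustrative choices, not from the talk.

```python
# Minimal Jraph sketch: build a GraphsTuple and pad it to fixed sizes so XLA
# can reuse one compiled program across graphs of varying shape.
import jax.numpy as jnp
import jraph

# A toy graph with 3 nodes and 2 directed edges: 0 -> 1 and 1 -> 2.
graph = jraph.GraphsTuple(
    nodes=jnp.ones((3, 4)),       # 3 nodes, 4 features each
    edges=jnp.ones((2, 4)),       # 2 edges, 4 features each
    senders=jnp.array([0, 1]),
    receivers=jnp.array([1, 2]),
    globals=jnp.zeros((1, 4)),    # one per-graph feature vector
    n_node=jnp.array([3]),
    n_edge=jnp.array([2]),
)

# Pad up to 8 nodes, 8 edges, and 2 graphs (the extra graph absorbs the
# padding), giving every batch the same static shape.
padded = jraph.pad_with_graphs(graph, n_node=8, n_edge=8, n_graph=2)
```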
Fri 10:00 a.m. - 10:10 a.m. | Q&A
Fri 10:10 a.m. - 10:30 a.m. | Break or virtual discussion
A virtual discussion space is available via the session link.
Fri 10:30 a.m. - 11:30 a.m. | Poster session (Posters)
List of posters:
Poster 3: FedGraphNN: A Federated Learning System and Benchmark for Graph Neural Networks. Chaoyang He, Keshav Balasubramanian, Emir Ceyani, Yu Rong, Peilin Zhao, Junzhou Huang, Murali Annavaram and Salman Avestimehr
Poster 4: IGNNITION: A framework for fast prototyping of Graph Neural Networks. David Pujol-Perich, José Suárez-Varela, Miquel Ferriol-Galmés, Shihan Xiao, Bo Wu, Albert Cabellos-Aparicio and Pere Barlet-Ros
Poster 5: Adaptive Filters and Aggregator Fusion for Efficient Graph Convolutions. Shyam Tailor, Felix Opolka, Pietro Lio and Nicholas Lane
Poster 7: NetXplain: Real-time explainability of Graph Neural Networks applied to Computer Networks. David Pujol-Perich, José Suárez-Varela, Shihan Xiao, Bo Wu, Albert Cabellos-Aparicio and Pere Barlet-Ros
Poster 8: Efficient Data Loader for Fast Sampling-based GNN Training on Large Graphs. Youhui Bai, Cheng Li, Zhiqi Lin, Yufei Wu, Youshan Miao, Yunxin Liu and Yinlong Xu
Poster 9: Directional Graph Networks. Dominique Beaini, Saro Passaro, Vincent Létourneau, William L. Hamilton, Gabriele Corso and Pietro Lio
Poster 10: Graphiler: A Compiler for Graph Neural Networks. Zhiqiang Xie, Zihao Ye, Minjie Wang, Zheng Zhang and Rui Fan
Poster 12: Analyzing the Performance of Graph Neural Networks with Pipe Parallelism. Matthew T. Dearing and Xiaoyan Wang
Poster 14: Reducing Communication in Graph Neural Network Training. Alok Tripathy, Katherine Yelick and Aydin Buluc
Poster 15: Deep Graph Learning for Program Analysis and System Optimization. Yao Xiao, Guixiang Ma, Nesreen K. Ahmed, Theodore L. Willke, Shahin Nazarian and Paul Bogdan
Poster 16: Efficient Distribution for Deep Learning on Large Graphs. Loc Hoang, Xuhao Chen, Hochan Lee, Roshan Dathathri, Gurbinder Gill and Keshav Pingali
Poster 18: Adaptive Load Balancing for Parallel GNN Training. Qidong Su, Minjie Wang, Da Zheng and Zheng Zhang
Poster 19: Privacy-Preserving Heterogeneous Network Embedding for Clinical Events. Gustavo Lima de Oliveira, Ricardo Marcondes Marcacini and Maria da Graça Campos Pimentel
PDF of the papers: https://gnnsys.github.io/#papers
Fri 11:30 a.m. - 12:30 p.m. | Break or virtual discussion
A virtual discussion space is available via the session link.
Fri 12:30 p.m. - 1:00 p.m. | Keynote Talk: GNNs for Charged Particle Reconstruction at the Large Hadron Collider by Savannah Thais (Princeton) (Talk)
The Large Hadron Collider (LHC) collides millions of protons per second, yielding a rich, multi-dimensional dataset with unique mathematical constraints. The raw data from particle detector electronics readouts must be processed to identify the interactions, trajectories, and decays of individual particles in order to enable downstream physics measurements. Traditional approaches to these tasks have relied on constructing physics-motivated variables from the raw data and using these variables as input to physics-based fits and algorithms. Recently, however, work has demonstrated that geometric deep learning approaches can effectively leverage the inherent geometries and relationships in raw collider data, often resulting in more efficient and more accurate particle reconstruction. This talk will describe the use of a range of recent GNN architectures for physics reconstruction tasks and, in particular, will focus on the use of edge-classifying GCNs, Interaction Networks, and 3D instance segmentation techniques for the task of charged particle trajectory reconstruction, or tracking.
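As a rough illustration of the edge-classification formulation used in GNN-based tracking, here is a hedged sketch (our simplification, not the talk's models): detector hits are nodes, candidate track segments are edges, and each edge is scored with the probability that it lies on a true particle trajectory.

```python
# Hedged sketch of edge classification on a hit graph. A real pipeline would
# first run several message-passing steps to produce the hit embeddings h;
# here we only show the final per-edge scoring step.
import jax
import jax.numpy as jnp

def segment_probabilities(h, senders, receivers, w):
    """h: (num_hits, d) hit embeddings; w: (2*d, 1) classifier weights.
    Returns, per candidate edge, the probability it is on a true track."""
    pairs = jnp.concatenate([h[senders], h[receivers]], axis=1)   # (E, 2d)
    return jax.nn.sigmoid(pairs @ w)[:, 0]
```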
Fri 1:00 p.m. - 1:10 p.m. | Q&A
Fri 1:10 p.m. - 1:40 p.m. | Keynote Talk: Machine Learning on Dynamic Graphs: Temporal Graph Networks by Emanuele Rossi (Imperial/Twitter) (Talk)
Graph neural network (GNN) research has surged to become one of the hottest topics in machine learning in recent years. GNNs have seen a series of successes on problems from the fields of biology, chemistry, social science, physics, and many others. So far, GNN models have been primarily developed for static graphs that do not change over time. However, many interesting real-world graphs are dynamic and evolve over time, with prominent examples including social networks, financial transactions, and recommender systems. In many cases, it is the dynamic behavior of such systems that conveys important insights, otherwise lost if one considers only a static graph. This talk will discuss Temporal Graph Networks, a recent and general approach for machine learning over dynamic graphs.
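To sketch the core mechanism, the following simplified snippet (our illustration; actual Temporal Graph Networks use learned message functions and a GRU-style memory updater) shows a per-node memory being refreshed from a batch of timestamped interaction events.

```python
# Simplified TGN-style memory update for a batch of interaction events
# (src, dst, timestamp, edge features). Real TGN uses learned message
# aggregation and a GRU updater; this sketch uses one affine map and
# assumes each source node appears at most once per batch.
import jax.numpy as jnp

def update_memory(memory, last_update, src, dst, t, edge_feat, w):
    dt = (t - last_update[src])[:, None]              # time since last event
    msg = jnp.concatenate([memory[src], memory[dst], dt, edge_feat], axis=1)
    new_mem = jnp.tanh(msg @ w)                       # stand-in for a GRU
    return memory.at[src].set(new_mem), last_update.at[src].set(t)
```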
Fri 1:40 p.m. - 1:50 p.m. | Q&A
Fri 1:50 p.m. - 2:10 p.m. | Break or virtual discussion
A virtual discussion space is available via the session link.
Fri 2:10 p.m. - 2:40 p.m. | Keynote Talk: Efficient GNNs: How Can Graphs Go From Last To Fast? by Nicholas Lane (Cambridge) (Talk)
In recent years the efficiency landscape of deep learning has been transformed. We can now scale training to support trillion-parameter models and execute inference on microcontrollers with just a few KBs of memory. But progress in ML efficiency has largely ignored the world of graph neural networks (GNNs) in favor of more established neural architectures and tasks. This must change if GNNs are to deliver on their promise of revolutionizing application domains ranging from drug discovery to recommendation systems. Today GNNs are simply too difficult to scale and deploy. In this talk, I will describe two of our recent steps towards addressing the open challenges of GNN efficiency. The first is Degree-Quant, one of the only GNN-specific quantization schemes, which we show offers up to 4.7x inference acceleration with negligible impact on accuracy. The second is a brand-new lightweight GNN architecture, Efficient Graph Convolutions (EGC), which lowers memory needs from O(E) to O(V) (a saving that is quadratic for dense graphs) while still achieving SOTA-level GNN performance. We hope these two methods can act as a useful foundation towards the scalable and efficient GNNs that will be required as this field continues to evolve.
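To make the O(E)-versus-O(V) point concrete, here is a hedged sketch (ours, not the EGC code from the talk) contrasting where per-edge activation memory arises in ordinary message passing; in EGC-style layers the weight multiply moves to the node side and, with a fused gather-aggregate kernel, the per-edge intermediate need not be materialized at all.

```python
# Illustration of the activation-memory contrast; not the EGC implementation.
# By linearity, both layers compute the same output for a shared weight w.
import jax
import jax.numpy as jnp

def mp_layer(x, senders, receivers, w):
    # Standard message passing: a transformed message per edge is stored for
    # the backward pass -> O(E) activation memory.
    messages = x[senders] @ w                                 # (E, d_out)
    return jax.ops.segment_sum(messages, receivers, x.shape[0])

def node_first_layer(x, senders, receivers, w):
    # Aggregate raw neighbor features, then transform per node -> only O(V)
    # transformed activations. (In plain JAX the gathered x[senders] is still
    # materialized; a fused gather-sum kernel avoids even that.)
    agg = jax.ops.segment_sum(x[senders], receivers, x.shape[0])
    return agg @ w                                            # (V, d_out)
```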
Fri 2:40 p.m. - 2:50 p.m. | Q&A
Fri 2:50 p.m. - 3:20 p.m. | Keynote Talk: Graph Neural Networks: Moving from Research to Commercial Applications by George Karypis (University of Minnesota/AWS) (Talk)
In the course of just a few years, Graph Neural Networks (GNNs) have emerged as the prominent supervised learning approach that brings the power of deep representation learning to graph and relational data. An ever-growing body of research shows that GNNs achieve state-of-the-art performance for problems such as link prediction, fraud detection, target-ligand binding activity prediction, knowledge-graph completion, and product recommendations. As a result, GNNs are quickly moving from the realm of academic research involving small graphs to powering commercial applications and very large graphs. In this talk we will provide an overview of our work to address the needs of commercial applications, which includes improving the computational efficiency and scaling of GNN model training for extremely large graphs and making it easy for developers to train and use GNN-based models by integrating graph-based ML techniques in graph databases.
Fri 3:20 p.m. - 3:30 p.m. | Q&A
Fri 3:30 p.m. - 3:40 p.m. | Closure (Closing remarks)
Fri 3:40 p.m. - 6:00 p.m. | Virtual discussion
A virtual discussion space is available via the session link.