The workshop begins at 9:50 a.m. in Room 240. Please see the schedule on our website.
Deep learning methods have made great strides in machine intelligence over the past few
years, but their demands for data and compute are growing faster than today's systems can
supply. As traditional system architectures approach their physical limits, the problem of
compute scalability is becoming more pressing, making it hard to predict how far current AI
methods and systems can go. These issues raise the question: what are the alternative
directions for the next generation of AI methods and the systems that will run them?
Processing domains such as analog, asynchronous, event-based, probabilistic, neuromorphic,
photonic, and quantum computing have all shown promise for faster, more efficient AI with new
capabilities, enabled by a fundamental shift in how AI systems operate.
The goal of this workshop is to open a discussion about the next-generation systems and methods that will help AI move forward, grounded in a realistic assessment of how these emerging approaches are progressing toward practical relevance, and on what timeframes.
We want to serve both experts and non-experts, believers and skeptics, by achieving the
following goals:
(1) Educate about new processing technologies and AI methods on the horizon.
(2) Evaluate the strengths and paths to practical viability of different approaches.
(3) Discuss methods to compare next-generation systems against traditional systems and
against each other.
(4) Inspire the integration of new technologies toward future AI methods and systems.
Thu 6:50 a.m. - 2:00 p.m. | SNAP Workshop (Workshop)
View the workshop program on our workshop website.
Thu 6:50 a.m. - 7:00 a.m. | Opening Notes (Intro)
Thu 7:00 a.m. - 7:50 a.m. | Keynote: The Neuromorphic Path to Faster, More Efficient, and More Intelligent Computing - Mike Davies (Intel) (Keynote)
Mike Davies is Director of Intel’s Neuromorphic Computing Lab. Since 2014 he has been researching neuromorphic architectures, algorithms, software, and systems, and has fabricated several neuromorphic chip prototypes to date, including the Loihi series. In the 2000s, as a founding employee of Fulcrum Microsystems and director of its silicon engineering, Mike pioneered high-performance asynchronous design methods and led the development of several generations of industry-leading Ethernet switches. Before that, he received B.S. and M.S. degrees from Caltech.
Thu 7:55 a.m. - 8:10 a.m. | Light-AI Interaction: Bridging Photonics and AI with Cross-Layer Hardware-Software Co-Design (Paper Talk)
Thu 8:10 a.m. - 8:25 a.m. | DOTA: A Dynamically-Operated Photonic Tensor Core for Energy-Efficient Transformer Accelerator (Paper Talk)
Thu 8:25 a.m. - 9:00 a.m. | The Intel Neuromorphic Deep Noise Suppression Challenge - Jonathan Timcheck (Intel) (Invited)
A critical enabler for progress in neuromorphic computing research is the ability to transparently evaluate different neuromorphic solutions on important tasks and to compare them to state-of-the-art conventional solutions. The Intel Neuromorphic Deep Noise Suppression Challenge (Intel N-DNS Challenge), inspired by the Microsoft DNS Challenge, tackles a ubiquitous and commercially relevant task: real-time audio denoising. Audio denoising is likely to reap the benefits of neuromorphic computing due to its low-bandwidth, temporal nature and its relevance for low-power devices. The Intel N-DNS Challenge consists of two tracks: a simulation-based algorithmic track to encourage algorithmic innovation, and a neuromorphic hardware (Loihi 2) track to rigorously evaluate solutions. For both tracks, we specify an evaluation methodology based on energy, latency, and resource consumption in addition to output audio quality. We make the Intel N-DNS Challenge dataset scripts and evaluation code freely accessible, encourage community participation with monetary prizes, and release a neuromorphic baseline solution which shows promising audio quality, high power efficiency, and low resource consumption when compared to Microsoft NsNet2 and a proprietary Intel denoising model used in production. We hope the Intel N-DNS Challenge will hasten innovation in neuromorphic algorithms research, especially in the area of training tools and methods for real-time signal processing. We expect the winners of the challenge will demonstrate that for problems like audio denoising, significant gains in power and resources can be realized on neuromorphic devices available today compared to conventional state-of-the-art solutions.
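To make the "output audio quality" side of the evaluation concrete, below is a minimal sketch of scale-invariant SNR (SI-SNR), a standard denoising metric of the kind such challenges report. This is an illustrative implementation with made-up toy data, not the challenge's actual evaluation code.

```python
import numpy as np

def si_snr(estimate: np.ndarray, target: np.ndarray, eps: float = 1e-8) -> float:
    """Scale-invariant signal-to-noise ratio in dB.

    Projects the estimate onto the target so the score ignores
    overall gain, then measures the residual (noise) energy.
    """
    estimate = estimate - estimate.mean()
    target = target - target.mean()
    # Component of the estimate that lies along the clean target.
    s_target = (np.dot(estimate, target) / (np.dot(target, target) + eps)) * target
    e_noise = estimate - s_target
    return 10 * np.log10((np.dot(s_target, s_target) + eps) /
                         (np.dot(e_noise, e_noise) + eps))

# Toy usage: a denoised signal that is a scaled, slightly noisy copy
# of the clean target should score well despite the gain mismatch.
rng = np.random.default_rng(0)
clean = rng.standard_normal(16000)                    # 1 s of audio at 16 kHz
denoised = 0.7 * clean + 0.01 * rng.standard_normal(16000)
print(f"SI-SNR: {si_snr(denoised, clean):.1f} dB")
```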
Thu 9:00 a.m. - 9:15 a.m. | Leveraging Residue Number System for Designing High-Precision Analog Deep Neural Network Accelerators (Paper Talk)
Thu 9:15 a.m. - 9:30 a.m. | Towards Cognitive AI System: a Survey and Prospective on Neuro-Symbolic AI (Paper Talk)
Thu 11:00 a.m. - 11:35 a.m. | Accelerating AI Through Photonic Computing and Communication - Jessie Rosenberg (Lightmatter) (Invited)
As AI workloads continue to grow, two major limiting factors are power consumption and interconnect bandwidth. By leveraging the high data throughput and scalability of photonic systems, silicon photonics presents an opportunity to break through performance bottlenecks in both of these areas. Photonic matrix multiplication systems can perform operations ~10x faster than the typical clock speed of electronic systems, while photonic interconnects improve memory bandwidth and enable larger and more flexible network topologies. Integration of photonics and CMOS enables scalability and cost advantages, and allows photonic components to seamlessly integrate with existing compute architectures and infrastructure. We will present recent developments in silicon photonics for AI workloads, discuss design and manufacturing challenges that allow scaling from the device to full system level, and contrast with other methods of analog and digital compute.
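A key contrast with digital compute, touched on above, is that analog photonic matrix multiplication trades exactness for speed and energy: effective precision is bounded by quantization and noise at the optical-electrical boundary. The sketch below is a generic toy model of this trade-off (our own illustrative assumptions, not Lightmatter's design), simulating a matrix-vector product with quantized inputs and additive readout noise.

```python
import numpy as np

rng = np.random.default_rng(1)

def quantize(x: np.ndarray, bits: int) -> np.ndarray:
    """Uniformly quantize values in [-1, 1] to the given bit width,
    mimicking the DACs that drive an analog compute core."""
    levels = 2 ** bits - 1
    return np.round((x + 1) / 2 * levels) / levels * 2 - 1

def analog_matvec(W: np.ndarray, x: np.ndarray, bits: int = 8,
                  noise_std: float = 1e-3) -> np.ndarray:
    """Toy model of an analog (e.g. photonic) matrix-vector product:
    quantized weights/inputs, ideal analog multiply-accumulate, then
    additive Gaussian readout noise at the detector."""
    y = quantize(W, bits) @ quantize(x, bits)
    return y + noise_std * rng.standard_normal(y.shape)

W = rng.uniform(-1, 1, (256, 256))
x = rng.uniform(-1, 1, 256)
exact = W @ x
approx = analog_matvec(W, x)
print(f"relative error: {np.linalg.norm(approx - exact) / np.linalg.norm(exact):.2e}")
```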
Thu 11:35 a.m. - 12:10 p.m. | Neural Circuit Theory: Bridging the Gap Between Neuroscience and Deep Learning - Ben Scellier (Rain Neuromorphics) (Invited)
We introduce Neural Circuit Theory (NCT), a mathematical framework which bridges neuroscience, deep learning, and electrical circuit theory. We show how NCT can describe biological neural circuits and leads to physical formulations of bio-plausible algorithms for credit assignment, such as Equilibrium Propagation and Difference Target Propagation. We show how these formulations can lead to quadratic speedups in the inference and training speed of energy-based models as well as estimation of curvature information in feedforward networks. Finally, we discuss the geometric structure that is embedded in NCT, which naturally contains information about the topology of the network.
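For readers unfamiliar with Equilibrium Propagation, the credit-assignment algorithm mentioned above: it estimates gradients by contrasting two relaxed states of an energy-based network, a "free" equilibrium and one weakly nudged toward the target, rather than by backpropagating errors. Below is a minimal numpy sketch on a toy quadratic energy; the network, energy function, and nudging strength are illustrative choices of ours, not the NCT formulation presented in the talk.

```python
import numpy as np

rng = np.random.default_rng(2)

def relax(W, x, y=None, beta=0.0, steps=200, lr=0.1):
    """Settle the state s to a minimum of the total energy
    F(s) = 0.5||s||^2 - s.T W x + beta * 0.5||s - y||^2
    by gradient descent on s: the 'free' phase when beta=0,
    the 'nudged' phase when beta > 0."""
    s = np.zeros(W.shape[0])
    for _ in range(steps):
        grad = s - W @ x
        if beta > 0:
            grad += beta * (s - y)
        s -= lr * grad
    return s

def eqprop_grad(W, x, y, beta=0.01):
    """Equilibrium Propagation estimate of the loss gradient:
    (dE/dW at nudged equilibrium - dE/dW at free equilibrium) / beta,
    where dE/dW = -s x^T for this energy."""
    s_free = relax(W, x)
    s_nudged = relax(W, x, y, beta)
    return (-np.outer(s_nudged, x) + np.outer(s_free, x)) / beta

W = rng.standard_normal((3, 5)) * 0.1
x = rng.standard_normal(5)
y = rng.standard_normal(3)

est = eqprop_grad(W, x, y)
true = np.outer(W @ x - y, x)   # exact gradient of 0.5||Wx - y||^2
print("max abs deviation:", np.abs(est - true).max())
```

As the nudging strength beta shrinks, the two-phase estimate converges to the true gradient, which is what makes the scheme attractive for physical substrates: both phases are just the circuit settling.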
Thu 12:10 p.m. - 12:45 p.m. | Quantum (and) AI: The Next Generation of Computing - Stefan Leichenauer (SandboxAQ) (Invited)
We are still many years away from large-scale quantum computers, which are poised for massive impact in a number of areas. Quantum computers will be used as part of general, hybrid computing platforms in the cloud, and will provide a key ingredient to push through barriers impossible to breach without them. They are not magical devices: classical computing, powered by AI, will still be handling most of the workload. It is possible that Quantum will also unlock new advances in AI itself, though this is not something to take for granted. In this talk I will discuss all of these issues, including steps we can take today to prepare for the quantum future.
Thu 12:50 p.m. - 1:20 p.m. | Cross-Layer Optimization for AI with Algorithm-Hardware Co-design - Helen Li (Duke University) (Invited)
The advancement of Artificial Intelligence (AI) and its swift deployment on resource-constrained systems relies on refined algorithm-hardware co-design. In this talk, we first present our approach to crafting efficient, lightweight AI algorithms via model compression and neural architecture search across a broad range of AI applications, such as image recognition, 2D/3D semantic segmentation, and recommender systems. Then, we apply cross-layer optimization and distributed learning to build fast, scalable AI algorithms with specialized compute kernels and hardware architectures. Finally, we demonstrate the improvements in the performance-efficiency trade-off on real-world applications, such as electronic design automation and adversarial machine learning. Through these explorations, we present our vision for the future of the full-stack optimization of AI solutions.
Thu 1:20 p.m. - 1:55 p.m. | Speaker Panel and Open Discussion (Panel)
Join our invited speakers in open discussion and debate on next-generation AI computing approaches!