


Invited Talks
Kathy Yelick

Machine learning is being used in nearly every discipline in science, from biology and environmental science to chemistry, cosmology, and particle physics. Scientific data sets continue to grow exponentially due to improvements in detectors, accelerators, imaging, and sequencing, as well as networks of environmental sensors and personal devices. In some domains, large data sets are being constructed, curated, and shared with the scientific community, and data may be reused for multiple problems using emerging algorithms and tools for new insights. Machine learning adds a powerful set of techniques to the scientific toolbox, used to analyze complex, high-dimensional data, automate and control experiments, approximate expensive experiments, and augment physical models with models learned from data. I will describe some of the exciting applications of machine learning in science and some of the challenges: ensuring that learned models are consistent with known physical properties, providing mechanistic models that offer new insights, and correcting for biases that arise from scientific instruments and processes.
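As one concrete illustration of keeping a learned model consistent with known physics, a common approach is to add a penalty term to the training loss that discourages violations of a known invariant. The sketch below, in PyTorch, assumes a hypothetical conserved quantity (total_energy), a small illustrative network, and a weighting factor lambda_phys; none of these come from the talk itself.

    import torch
    import torch.nn as nn

    # Small regression net mapping a state x to a predicted next state.
    # Architecture and sizes are illustrative assumptions.
    model = nn.Sequential(nn.Linear(4, 64), nn.Tanh(), nn.Linear(64, 4))

    def total_energy(state):
        # Hypothetical invariant: sum of squares of the state components.
        return (state ** 2).sum(dim=-1)

    def loss_fn(x, y, lambda_phys=0.1):
        y_hat = model(x)
        data_loss = nn.functional.mse_loss(y_hat, y)
        # Penalize predictions whose "energy" drifts from the input's,
        # softly enforcing the (assumed) conservation law.
        phys_loss = ((total_energy(y_hat) - total_energy(x)) ** 2).mean()
        return data_loss + lambda_phys * phys_loss

    # One illustrative training step on random data.
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    x, y = torch.randn(32, 4), torch.randn(32, 4)
    opt.zero_grad()
    loss = loss_fn(x, y)
    loss.backward()
    opt.step()

Raising lambda_phys trades data fit for tighter adherence to the constraint; building the invariant into the architecture is an alternative when it must hold exactly.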

On the systems side, scientists have always demanded some of the fastest computers for large and complex simulations and, more recently, for high-throughput simulations that produce databases of annotated materials and more. Now the desire to train …

Jeannette Wing

Recent years have seen an astounding growth in the deployment of AI systems in critical domains such as autonomous vehicles, criminal justice, healthcare, hiring, housing, human resource management, law enforcement, and public safety, where decisions taken by AI agents directly impact human lives. Consequently, there is increasing concern about whether these decisions can be trusted to be correct, reliable, fair, and safe, especially under adversarial attacks. How, then, can we deliver on the promised benefits of AI while addressing these scenarios that have life-critical consequences for people and society? In short, how can we achieve trustworthy AI?

Under the umbrella of trustworthy computing, there is a long-established framework employing formal methods and verification techniques for ensuring trust properties like reliability, security, and privacy of traditional software and hardware systems. Just as for trustworthy computing, formal verification could be an effective approach for building trust in AI-based systems. However, the set of properties needs to be extended beyond reliability, security, and privacy to include fairness, robustness, probabilistic accuracy under uncertainty, and other properties yet to be identified and defined. Further, there is a need for new property specifications and verification techniques to handle new kinds of artifacts, e.g., data distributions, …
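To make one such extended property concrete: local robustness of a classifier can be checked with interval bound propagation, a simple sound-but-incomplete verification technique that pushes an input interval through the network and asks whether the predicted class can change anywhere inside it. The tiny two-layer ReLU network, the epsilon radius, and the helper names below are illustrative assumptions, not material from the talk.

    import numpy as np

    def interval_affine(lo, hi, W, b):
        # Propagate an axis-aligned box through x -> W @ x + b.
        center = (lo + hi) / 2.0
        radius = (hi - lo) / 2.0
        c = W @ center + b
        r = np.abs(W) @ radius
        return c - r, c + r

    def certify_robust(x, eps, W1, b1, W2, b2):
        # True only if every input within L-inf distance eps of x
        # provably keeps the same predicted class.
        lo, hi = x - eps, x + eps
        lo, hi = interval_affine(lo, hi, W1, b1)
        lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)  # ReLU is monotone
        lo, hi = interval_affine(lo, hi, W2, b2)
        pred = np.argmax(W2 @ np.maximum(W1 @ x + b1, 0.0) + b2)
        # Robust if the predicted logit's lower bound beats every
        # other logit's upper bound over the whole input box.
        others = np.delete(hi, pred)
        return lo[pred] > others.max()

    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)
    W2, b2 = rng.normal(size=(3, 8)), np.zeros(3)
    print(certify_robust(rng.normal(size=4), 0.01, W1, b1, W2, b2))

Because interval bounds are conservative, a False result means only that this check could not certify robustness, not that an adversarial example actually exists.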

William Dally

Deep learning has been enabled by powerful hardware, and its progress is gated by improvements in hardware performance. This talk will review the current state of deep learning hardware and explore a number of directions to continue performance scaling in the absence of Moore’s Law. Topics discussed will include number representation, sparsity, memory organization, optimized circuits, and analog computation.
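As a small illustration of the number-representation theme, the sketch below simulates symmetric per-tensor INT8 quantization, which trades a modest accuracy loss for a roughly 4x reduction in memory traffic versus FP32 and much cheaper multiply-accumulate circuits; the helper names are hypothetical and not taken from the talk.

    import numpy as np

    def quantize_int8(x):
        # Symmetric per-tensor quantization:
        # map [-max|x|, max|x|] onto integers in [-127, 127].
        scale = np.abs(x).max() / 127.0
        q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
        return q, scale

    def dequantize(q, scale):
        return q.astype(np.float32) * scale

    x = np.random.randn(1024).astype(np.float32)
    q, s = quantize_int8(x)
    err = np.abs(x - dequantize(q, s)).max()
    print(f"max abs quantization error: {err:.5f}")  # small relative to |x|

Per-channel scales and finer formats (e.g., FP8 or block floating point) follow the same pattern with different trade-offs between range and precision.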