Workshop
SARA: Secure and Resilient Autonomy
Pradip Bose · Nandhini Chandramoorthy · Augusto Vega · Karthik Swaminathan

Wed Mar 04 07:00 AM -- 03:30 PM (PST) @ Level 1 Room 3
Event URL: http://sara-workshop.org

This workshop will bring classical system architecture and design experts together with AI/ML algorithm experts in one forum. The goal is to brainstorm about challenges in designing secure and resilient AI-centric systems in general, with a special focus on autonomous systems (such as self-driving cars and industrial robots), where safety and security are of paramount importance.

The workshop will blend the knowledge and expertise of classical mainframe and server architects, who design ultra-reliable and secure systems, with that of AI domain experts, particularly those with established expertise in developing reliable and secure AI algorithms.

Detailed workshop information, abstract submission instructions, and dates: https://sara-workshop.org

Wed 7:00 a.m. - 7:05 a.m.
Introduction: Nandhini Chandramoorthy (IBM) (Welcoming Remarks)
Wed 7:05 a.m. - 7:50 a.m.
Keynote I: Dr. Thomas Rondeau (DARPA): Secure and Resilient - a DARPA View (Keynote presentation)
Wed 7:50 a.m. - 8:05 a.m.
Coffee Break + Discussion (Break)
Wed 8:05 a.m. - 8:25 a.m.

Abstract: Physically Unclonable Functions (PUFs) and True Random Number Generators (TRNGs) are foundational security primitives underpinning the root of trust in computing platforms. Contradictory design strategies for harvesting static and dynamic entropy typically necessitate independent PUF and TRNG circuits, adding to design cost. This tutorial describes a unified static and dynamic entropy generator that leverages a common entropy source for simultaneous PUF and TRNG operation. We will present self-calibration techniques to segregate bitcells at run time into PUF and TRNG candidates, along with entropy-extraction techniques that maximize TRNG entropy while stabilizing PUF bits. Cryptographic circuits such as the Advanced Encryption Standard (AES) are vulnerable to correlation power analysis (CPA) side-channel attacks (SCA), in which an adversary monitors the supply-current signature of a chip to decipher the value of embedded keys. This tutorial will also discuss arithmetic/circuit countermeasures that minimize the correlation of the AES current to the embedded keys, improving the SCA resistance of the hardware by 1200x in both the time and frequency domains.
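
As background for the side-channel portion, here is a minimal sketch of the CPA idea: correlate key-dependent leakage hypotheses against measured power traces and pick the key guess with the strongest correlation. The traces below are synthetic, and a real attack would target a nonlinear point such as the AES S-box output:

```python
# Minimal CPA sketch on synthetic traces (illustrative only; a real
# attack targets the AES S-box output rather than a raw XOR).
import numpy as np

rng = np.random.default_rng(0)
SECRET_KEY_BYTE = 0x3C               # unknown to the attacker
N_TRACES = 2000

plaintexts = rng.integers(0, 256, N_TRACES)

def hamming_weight(values):
    return np.array([bin(int(v)).count("1") for v in values])

# Simulated leakage: Hamming weight of (plaintext XOR key) plus noise.
traces = hamming_weight(plaintexts ^ SECRET_KEY_BYTE) + rng.normal(0, 1.0, N_TRACES)

# Attacker: correlate a leakage hypothesis for every key guess with the traces.
correlations = np.array([
    np.corrcoef(hamming_weight(plaintexts ^ guess), traces)[0, 1]
    for guess in range(256)
])

# Without an S-box the complement key anti-correlates perfectly, so take
# the signed maximum; with a real S-box one would take the absolute maximum.
print(f"recovered key byte: {int(np.argmax(correlations)):#04x}")   # expect 0x3c
```

The countermeasures described in the tutorial aim to flatten exactly this correlation peak.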

Bio: Sanu Mathew is a Senior Principal Engineer with the Circuits Research Labs at Intel Corporation, Hillsboro, Oregon, where he heads the security arithmetic circuits research group, responsible for developing special-purpose hardware accelerators for cryptography and security. He received his Ph.D. degree in Electrical and Computer Engineering from the State University of New York at Buffalo in 1999. He holds 62 issued patents, has 20 patents pending, and has published over 80 conference/journal papers. He is a Fellow of the IEEE.

Wed 8:25 a.m. - 8:40 a.m.

Abstract: As Convolutional Neural Networks (CNNs) are increasingly employed in safety-critical applications, it is important that they behave reliably in the face of hardware errors. Transient hardware errors may propagate corrupted state during execution, resulting in software-manifested errors that can adversely affect high-level decision making. This talk will present HarDNN, a software-directed approach that identifies vulnerable computations during a CNN inference and selectively protects them based on their propensity to corrupt the inference output in the presence of a hardware error. We show that HarDNN can accurately estimate the relative vulnerability of a feature map (fmap) in CNNs using a statistical error-injection campaign, and we explore heuristics for fast vulnerability assessment. Based on these results, we analyze the tradeoff between error coverage and computational overhead that system designers can use to employ selective protection.
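
To make the statistical error-injection idea concrete, the sketch below ranks the feature maps of one layer by how often a single injected perturbation flips the top-1 prediction. This is not the HarDNN implementation; the toy model, injection site, and error magnitude are assumptions for illustration:

```python
# Sketch of statistical error injection for feature-map vulnerability
# ranking, in the spirit of HarDNN (not the authors' code).
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 8, 3, padding=1), nn.ReLU(),
    nn.Flatten(), nn.Linear(8 * 8 * 8, 10),
).eval()

x = torch.randn(1, 3, 8, 8)      # stand-in input; a real study uses a dataset
inject_layer = model[2]          # second conv: its output fmaps are under study

def estimate_vulnerability(fmap_idx, n_trials=100, magnitude=20.0):
    """Fraction of single-element injections that flip the top-1 prediction."""
    with torch.no_grad():
        clean_top1 = model(x).argmax(1).item()
    flips = 0
    for _ in range(n_trials):
        def hook(module, inputs, output):
            h = torch.randint(output.shape[2], (1,)).item()
            w = torch.randint(output.shape[3], (1,)).item()
            output[0, fmap_idx, h, w] += magnitude   # crude transient-error model
            return output
        handle = inject_layer.register_forward_hook(hook)
        with torch.no_grad():
            flips += int(model(x).argmax(1).item() != clean_top1)
        handle.remove()
    return flips / n_trials

# Rank all 8 feature maps of the layer by estimated vulnerability.
vulnerability = {i: estimate_vulnerability(i) for i in range(8)}
print(sorted(vulnerability.items(), key=lambda kv: -kv[1]))
```

Selective protection would then be applied only to the highest-ranked fmaps, trading coverage against overhead.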

Wed 8:40 a.m. - 9:00 a.m.

Abstract: Artificial intelligence platforms are increasingly deployed in safety-critical applications and autonomous systems: self-driving cars, robots, and drones, to name a few. Unlike AI in cloud environments, AI on these platforms must perform reliably under changing environmental conditions and remain robust against different types of noise, while meeting stringent energy and time constraints. The reliability of AI platforms in unreliable environments is therefore a key challenge for the deployment of AI in real-time safety-critical systems. This talk will present a broad perspective on how to design AI platforms to achieve this goal. First, we will present examples of AI architectures and algorithms that improve robustness against dynamic environments and noise, both natural and adversarial. Next, we will discuss examples of how to make AI platforms robust against hardware-induced noise and variation. These discussions will focus on AI based on statistical machine learning models, including deep learning. Finally, we will present a new generation of AI models that couple statistical learning with dynamical systems and neuro-inspired learning to enhance the reliability of AI models. The talk will conclude with future research opportunities and directions in this area.
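
One generic technique in this space, offered here only as an illustration and not as the speaker's specific method, is noise-aware training: injecting random perturbations into the weights during training so that the learned model tolerates hardware-induced parameter variation at inference time. A minimal sketch, assuming a Gaussian noise model and made-up data:

```python
# Noise-aware training sketch: gradients are evaluated at a noisy
# operating point, then applied to the clean weights. The noise model
# (Gaussian, std=0.05) and the toy task are assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

def train_step(x, y, weight_noise_std=0.05):
    # Temporarily perturb the weights, compute the loss there, then
    # restore the clean weights before applying the update.
    saved = [p.detach().clone() for p in model.parameters()]
    with torch.no_grad():
        for p in model.parameters():
            p.add_(torch.randn_like(p) * weight_noise_std)
    loss = loss_fn(model(x), y)
    opt.zero_grad()
    loss.backward()
    with torch.no_grad():
        for p, s in zip(model.parameters(), saved):
            p.copy_(s)          # restore clean weights
    opt.step()
    return loss.item()

x, y = torch.randn(64, 16), torch.randint(0, 4, (64,))
for step in range(5):
    print(train_step(x, y))
```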

Bio: Saibal Mukhopadhyay received his B.E. degree in Electronics and Telecommunication Engineering from Jadavpur University, Calcutta, India, in 2000, and his Ph.D. degree in Electrical and Computer Engineering from Purdue University, West Lafayette, IN, in 2006. He was a Research Staff Member at the IBM T. J. Watson Research Center, Yorktown Heights, NY. Since September 2007 he has been with the School of Electrical and Computer Engineering at the Georgia Institute of Technology, Atlanta, GA, where he is currently a Joseph M. Pettit Professor of Electrical and Computer Engineering. His current research interests include neuromorphic computing and mixed-signal systems. Dr. Mukhopadhyay received the Office of Naval Research Young Investigator Award in 2012, the National Science Foundation CAREER Award in 2011, the IBM Faculty Partnership Award in 2009 and 2010, the SRC Inventor Recognition Award in 2008, the SRC Technical Excellence Award in 2005, and the IBM PhD Fellowship Award for 2004-2005. He has received the IEEE Transactions on VLSI Systems (TVLSI) Best Paper Award in 2014, the IEEE Transactions on Components, Packaging, and Manufacturing Technology (TCPMT) Best Paper Award in 2014, the IEEE/ACM International Symposium on Low-Power Electronic Design (ISLPED) Best Paper Award in 2014, the International Conference on Computer Design (ICCD) Best Paper Award in 2004, the IEEE Nano Best Student Paper Award in 2003, and multiple Best in Session Awards at SRC TECHCON in 2014 and 2005. He has authored or co-authored over 150 papers in refereed journals and conferences, and has been awarded six U.S. patents. He is a Senior Member of the IEEE.

Wed 9:00 a.m. - 9:15 a.m.
Towards Information Theoretic Adversarial Examples: Chia-Yi Hsu (NCHU), Pin-Yu Chen (IBM) and Chia-Mu Yu (NCHU) (Regular Presentation)
Wed 9:15 a.m. - 9:30 a.m.
Explaining Away Attacks Against Neural Networks: Sean Saito, Jin Wang (SAP Asia) (Regular Presentation)
Wed 9:30 a.m. - 10:00 a.m.
Poster Session + Discussion (Poster session)
Wed 10:00 a.m. - 11:30 a.m.
Lunch Break (Break)
Wed 11:30 a.m. - 12:00 p.m.
Poster Session + Discussion (Contd.) (Poster session)
Wed 12:00 p.m. - 12:45 p.m.

Abstract: Deep learning has achieved best-in-class performance in many application domains and is widely used in scenarios such as self-driving cars, healthcare, and robotics. However, deep neural networks are also vulnerable to adversarial attacks. This talk will introduce new methods for generating adversarial attacks leveraging ADMM (alternating direction method of multipliers) and experiments in designing adversarial examples in the physical world. The resulting physical-world adversarial T-shirt, designed to evade neural network detection, has been featured and cited in over 100 media outlets, including Communications of the ACM, The Register, The Boston Globe, and The New Yorker. Beyond advanced attack methods, this talk will discuss a concurrent adversarial training and model compression technique that achieves simultaneous robustness and compactness for deep learning applications in security-critical and resource-limited computing environments. The second part of the talk will introduce our hardware-aware deep neural network weight pruning method targeting FPGA platforms and 3D convolutional neural networks for video recognition. Furthermore, the talk will discuss our new privacy-preserving weight pruning techniques.
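
To illustrate the basic mechanics of attack generation, here is a minimal sketch of a plain L-infinity PGD attack; the ADMM-based methods discussed in the talk are more general, and the model, data, and perturbation budget below are placeholders:

```python
# L-infinity PGD attack sketch (a simpler relative of the ADMM-based
# attacks; shown only to illustrate the attack loop).
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10)).eval()
loss_fn = nn.CrossEntropyLoss()

def pgd_attack(x, y, eps=0.1, alpha=0.02, steps=10):
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()        # ascend the loss
            x_adv = x + (x_adv - x).clamp(-eps, eps)   # project to the eps-ball
            x_adv = x_adv.clamp(0, 1)                  # stay a valid image
    return x_adv.detach()

x = torch.rand(4, 1, 28, 28)
y = torch.randint(0, 10, (4,))
x_adv = pgd_attack(x, y)
print((x_adv - x).abs().max())   # bounded by eps
```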

Bio: Dr. Xue (Shelley) Lin has been an assistant professor in the Department of Electrical and Computer Engineering at Northeastern University since 2017. She received her bachelor's degree in Microelectronics from Tsinghua University, China, and her Ph.D. degree from the Department of Electrical Engineering at the University of Southern California in 2016. Her research interests include deep learning security and hardware acceleration, machine learning and computing in cyber-physical systems, high-performance and mobile cloud computing systems, and VLSI. Her research has been recognized by several NSF awards and supported by the Air Force Research Laboratory, the Office of Naval Research, and Lawrence Livermore National Laboratory. She received the Best Paper Award at ISVLSI 2014 and the Top Paper Award at CLOUD 2014.

Wed 12:45 p.m. - 1:00 p.m.

Abstract: Target encoding is an effective technique for improving the performance of machine learning methods, but existing approaches require a significant increase in learning capacity and thus demand more computational power and training data. This talk presents MUTE, a novel and efficient target encoding scheme that improves both the generalizability and the robustness of a target model by understanding the inter-class characteristics of a target dataset. By extracting the confusion level between the target classes in a dataset, MUTE strategically optimizes the Hamming distances among target encodings. Such optimized target encodings offer higher classification strength for neural network models with negligible computational overhead and without increasing the model size.
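
The underlying objective can be illustrated with a toy sketch: given pairwise confusion scores between classes, prefer codebooks in which frequently confused pairs are assigned codewords far apart in Hamming distance. The confusion matrix below is made up, and random search stands in for the paper's strategic optimization:

```python
# Toy illustration of a MUTE-style objective (not the paper's algorithm):
# reward codebooks whose confused class pairs sit far apart in code space.
import itertools
import numpy as np

rng = np.random.default_rng(0)
N_CLASSES, N_BITS = 4, 8

# Hypothetical pairwise confusion scores (higher = more often confused).
confusion = np.array([[0, 5, 1, 1],
                      [5, 0, 1, 1],
                      [1, 1, 0, 3],
                      [1, 1, 3, 0]], dtype=float)

def hamming(a, b):
    return int(np.sum(a != b))

def score(codes):
    return sum(confusion[i, j] * hamming(codes[i], codes[j])
               for i, j in itertools.combinations(range(N_CLASSES), 2))

# Random search over candidate binary codebooks, purely to show the objective.
best_codes, best_score = None, -1.0
for _ in range(2000):
    codes = rng.integers(0, 2, size=(N_CLASSES, N_BITS))
    s = score(codes)
    if s > best_score:
        best_codes, best_score = codes, s
print(best_score)
print(best_codes)
```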

Wed 1:00 p.m. - 1:15 p.m.
WARDEN: Warranting Robustness Against Deception in Data Centers: Hazar Yueksel, Ramon Bertran, Alper Buyuktosunoglu (IBM) (Regular Presentation)
Wed 1:15 p.m. - 1:45 p.m.

Abstract: Enhancing model robustness under new and even adversarial environments is a crucial milestone toward building trustworthy and reliable machine learning systems. Current robust training methods such as adversarial training explicitly specify an "attack" (e.g., an Lp-norm bounded perturbation) to generate adversarial examples during model training in order to improve adversarial robustness. In this work, we take a different perspective and propose a new framework, SPROUT (self-progressing robust training). During model training, SPROUT progressively adjusts the training label distribution via our proposed parametrized label smoothing technique, making training free of attack generation and more scalable. We also motivate SPROUT using a general formulation based on vicinity risk minimization, which includes many robust training methods as special cases. Compared with state-of-the-art adversarial training methods (PGD and TRADES) under L-infinity-norm bounded attacks and various invariance tests, SPROUT consistently attains superior performance and is more scalable to large neural networks. Our results shed new light on scalable, effective, and attack-independent robust training methods.
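
For reference, the sketch below shows the mechanics of training against smoothed (soft) label distributions; SPROUT's contribution is to parametrize this distribution and progressively adjust it during training, which the fixed smoothing here deliberately does not capture:

```python
# Label smoothing mechanics: mix one-hot targets with a uniform
# distribution and train against the soft targets. SPROUT's actual
# parametrization and update rule are richer than this fixed version.
import torch
import torch.nn.functional as F

def smoothed_targets(labels, n_classes, alpha=0.1):
    """Mix one-hot labels with a uniform distribution, weight alpha."""
    one_hot = F.one_hot(labels, n_classes).float()
    return (1 - alpha) * one_hot + alpha / n_classes

def soft_label_loss(logits, soft_targets):
    # Cross-entropy against a soft target distribution.
    return -(soft_targets * F.log_softmax(logits, dim=1)).sum(1).mean()

logits = torch.randn(4, 10)
labels = torch.randint(0, 10, (4,))
print(soft_label_loss(logits, smoothed_targets(labels, 10)))
```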

Bio:
Dr. Pin-Yu Chen is currently a research staff member at IBM Thomas J. Watson Research Center, Yorktown Heights, NY, USA. He is also the chief scientist of RPI-IBM AI Research Collaboration and PI of ongoing MIT-IBM Watson AI Lab projects. Dr. Chen received his Ph.D. degree in electrical engineering and computer science and M.A. degree in Statistics from the University of Michigan, Ann Arbor, USA, in 2016. He received his M.S. degree in communication engineering from National Taiwan University, Taiwan, in 2011 and B.S. degree in electrical engineering and computer science (undergraduate honors program) from National Chiao Tung University, Taiwan, in 2009.

Dr. Chen’s recent research is on adversarial machine learning and robustness of neural networks. His long-term research vision is building trustworthy machine learning systems. He has published more than 20 papers on trustworthy machine learning at major AI and machine learning conferences, given tutorials at CVPR’20, ECCV’20, ICASSP’20, KDD’19 and Big Data’18, and co-organized several workshops for adversarial machine learning. His research interest also includes graph and network data analytics and their applications to data mining, machine learning, signal processing, and cyber security. He was the recipient of the Chia-Lun Lo Fellowship from the University of Michigan Ann Arbor. He received the NIPS 2017 Best Reviewer Award, and was also the recipient of the IEEE GLOBECOM 2010 GOLD Best Paper Award. Dr. Chen is currently on the editorial board of PLOS ONE.

At IBM Research, Dr. Chen has co-invented more than 20 U.S. patents. In 2019, he received two Outstanding Research Accomplishment awards for research in adversarial robustness and trusted AI, and one Research Accomplishment award for research in graph learning and analysis.

Wed 1:45 p.m. - 2:00 p.m.
Coffee Break + Discussion (Break)
Wed 2:00 p.m. - 3:00 p.m.
Panel Discussion: Pin-Yu Chen (IBM), Akshay Deshpande (Soothsayer Analytics), Xue Lin (Northeastern University), Sean Saito (SAP Asia); Moderators: Dr. Nandhini Chandramoorthy and Dr. Pradip Bose (IBM) (Panel)
Wed 3:00 p.m. - 3:30 p.m.

Closing remarks and discussion on a special journal issue.

Author Information

Pradip Bose (IBM T. J. Watson Research Center)
Nandhini Chandramoorthy (IBM T. J. Watson Research Center)
Augusto Vega (IBM Research)
Karthik Swaminathan (IBM Research)