Poster

SysNoise: Exploring and Benchmarking Training-Deployment System Inconsistency

Yan Wang · Yuhang Li · Ruihao Gong · Aishan Liu · yanfei wang · Jian Hu · Yongqiang Yao · Yunchen Zhang · tianzi xiaotian · Fengwei Yu · Xianglong Liu

Ballroom B - Position 13

Abstract:

Extensive studies have shown that deep learning models are vulnerable to adversarial and natural noise, yet little is known about their robustness to noise caused by different system implementations. In this paper, we introduce, for the first time, SysNoise, a frequently occurring but often overlooked noise in the deep learning training-deployment cycle. SysNoise arises when the source training system is switched to a disparate target system at deployment, where many tiny system mismatches add up to a non-negligible difference. We first identify and classify SysNoise into three categories based on the inference stage; we then build a holistic benchmark to quantitatively measure its impact on 20+ models, covering image classification, object detection, instance segmentation, and natural language processing tasks. Our extensive experiments reveal that SysNoise has a measurable impact on model robustness across tasks, and that common mitigations such as data augmentation and adversarial training offer limited protection against it. Together, our findings open a new research topic, and we hope this work will draw research attention to the role of deep learning deployment systems in model performance.
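The abstract does not enumerate the specific operators it benchmarks, but a minimal sketch of one plausible instance of training-deployment inconsistency is image resizing implemented by two different libraries. The snippet below (illustrative only; the choice of PIL vs. OpenCV and bilinear interpolation is an assumption, not taken from the paper) shows how two nominally identical "bilinear" resizes can disagree at the pixel level, the kind of tiny mismatch that SysNoise aggregates.

# Sketch: the same image resized with two backends that both claim
# bilinear interpolation can produce different pixel values. This is an
# assumed example of a train/deploy mismatch, not the paper's exact setup.
import numpy as np
import cv2
from PIL import Image

# Synthetic 256x256 RGB image standing in for a model input.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(256, 256, 3), dtype=np.uint8)

# Resize to 224x224 with "bilinear" interpolation in two libraries.
pil_out = np.asarray(
    Image.fromarray(img).resize((224, 224), resample=Image.BILINEAR)
)
cv2_out = cv2.resize(img, (224, 224), interpolation=cv2.INTER_LINEAR)

# The outputs differ even though both calls nominally do the same thing;
# many such small discrepancies accumulate into a non-negligible gap
# between the training pipeline and the deployment pipeline.
diff = np.abs(pil_out.astype(np.int16) - cv2_out.astype(np.int16))
print("mismatched pixels:", int((diff > 0).sum()), "max abs diff:", int(diff.max()))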
