Oral

SUOD: Accelerating Large-Scale Unsupervised Heterogeneous Outlier Detection

Yue Zhao · Xiyang Hu · Cheng Cheng · Cong Wang · Changlin Wan · Wen Wang · Jianing Yang · Haoping Bai · Zheng Li · Cao Xiao · Yunlong Wang · Zhi Qiao · Jimeng Sun · Leman Akoglu

Abstract:

Outlier detection (OD) is a key machine learning (ML) task for identifying abnormal objects from general samples, with numerous high-stakes applications including fraud detection and intrusion detection. Due to the lack of ground-truth labels, practitioners often have to build a large number of unsupervised, heterogeneous models (i.e., different algorithms with varying hyperparameters) for further combination and analysis, rather than relying on a single model. How can we accelerate both the training of a large number of unsupervised, heterogeneous OD models and the scoring of newly arriving samples by outlyingness (referred to as prediction throughout the paper)? In this study, we propose a modular acceleration system, called SUOD, to address this challenge. The proposed system focuses on three complementary acceleration aspects (data reduction for high-dimensional data, approximation for costly models, and taskload imbalance optimization for distributed environments), while maintaining detection performance. Extensive experiments on more than 20 benchmark datasets demonstrate SUOD's effectiveness in heterogeneous OD acceleration, along with a real-world deployment case on fraudulent claim analysis at IQVIA, a leading healthcare firm. We open-source SUOD for reproducibility and accessibility.
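The three acceleration ideas in the abstract can be illustrated with a short sketch. The snippet below is a minimal, hypothetical example (it is not SUOD's actual API), assuming scikit-learn building blocks: (1) data reduction via random projection, (2) approximating a costly unsupervised detector with a fast supervised regressor trained on its outlier scores so that new samples can be scored cheaply, and (3) a simple cost-aware assignment of heterogeneous models to workers to reduce taskload imbalance.

```python
# Hypothetical sketch of the three acceleration ideas (not the SUOD library's API).
import numpy as np
from sklearn.random_projection import GaussianRandomProjection
from sklearn.ensemble import RandomForestRegressor
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.RandomState(0)
X_train = rng.randn(1000, 200)   # high-dimensional training data (synthetic)
X_new = rng.randn(100, 200)      # newly arriving samples to score

# (1) Data reduction: project high-dimensional features to a lower dimension.
proj = GaussianRandomProjection(n_components=32, random_state=0)
X_train_low = proj.fit_transform(X_train)
X_new_low = proj.transform(X_new)

# (2) Model approximation: fit a costly unsupervised detector once, then train
# a fast regressor to mimic its outlier scores for cheap prediction on new data.
lof = LocalOutlierFactor(n_neighbors=35, novelty=True).fit(X_train_low)
train_scores = -lof.decision_function(X_train_low)   # higher = more outlying
approximator = RandomForestRegressor(n_estimators=50, random_state=0)
approximator.fit(X_train_low, train_scores)
new_scores = approximator.predict(X_new_low)         # fast scoring of new samples

# (3) Taskload balancing: greedily assign models to workers by estimated cost
# so no single worker becomes the bottleneck (illustrative costs only).
model_costs = {"LOF": 5.0, "kNN": 4.0, "IForest": 1.0, "HBOS": 0.5}
n_workers = 2
loads = [0.0] * n_workers
assignment = {w: [] for w in range(n_workers)}
for name, cost in sorted(model_costs.items(), key=lambda kv: -kv[1]):
    w = int(np.argmin(loads))    # pick the least-loaded worker
    assignment[w].append(name)
    loads[w] += cost
print(new_scores[:5], assignment)
```

This is only a conceptual illustration of the paper's three modules under stated assumptions; the released SUOD system packages these ideas behind its own interface.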
