

Contributed Talk 1 in Workshop: Journal of Opportunities, Unexpected limitations, Retrospectives, Negative results, and Experiences

Applying Maximal Coding Rate Reduction to Text Classification

Yuxin Liang


Abstract:

Text classification is one of the fundamental tasks in natural language processing (NLP), and recent deep learning models have made great progress in this area. However, the text features produced by common NLP models such as Transformers and TextCNN suffer from a high degree of anisotropy, which degrades the expressiveness of the learned representations. The Maximal Coding Rate Reduction (MCR2) principle maximizes the difference between the coding rate of the whole dataset and the sum of the coding rates of the individual classes, which can lead to more isotropic representations. We attempt to migrate the MCR2 principle from image classification to text classification. The results show that applying the MCR2 principle enables models to obtain more uniform text embeddings that are orthogonal across classes, but at the same time reduces text classification accuracy.
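
For readers unfamiliar with MCR2, the objective is the coding rate reduction ΔR(Z; ε) = R(Z; ε) − R_c(Z; ε | Π), where R is the rate-distortion coding rate of the whole feature matrix and R_c is the weighted sum of the coding rates of the per-class submatrices. The sketch below is a minimal illustrative PyTorch rendering of that loss, not code released with this talk; it assumes the feature matrix Z has shape d × m with features as columns and normalized to the unit sphere (standard practice for MCR2), that labels is an integer tensor of length m, and that the precision eps = 0.5 is a hypothetical default.

    import torch

    def coding_rate(Z, eps=0.5):
        """R(Z; eps): bits needed to encode the d x m feature matrix Z
        up to precision eps. Larger R means the features fill more volume."""
        d, m = Z.shape
        scalar = d / (m * eps ** 2)
        return 0.5 * torch.logdet(torch.eye(d) + scalar * Z @ Z.T)

    def mcr2_loss(Z, labels, num_classes, eps=0.5):
        """Negative coding rate reduction, -(R - R_c). Minimizing it expands
        the whole feature set while compressing each class, pushing the
        classes toward mutually orthogonal subspaces."""
        d, m = Z.shape
        R = coding_rate(Z, eps)
        Rc = 0.0
        for j in range(num_classes):
            Zj = Z[:, labels == j]          # columns belonging to class j
            mj = Zj.shape[1]
            if mj == 0:
                continue
            scalar = d / (mj * eps ** 2)
            Rc = Rc + (mj / (2 * m)) * torch.logdet(
                torch.eye(d) + scalar * Zj @ Zj.T)
        return -(R - Rc)

    # Usage (hypothetical encoder): normalize features before the loss.
    # Z = encoder(batch).T                                  # d x m
    # Z = torch.nn.functional.normalize(Z, dim=0)           # unit-norm columns
    # loss = mcr2_loss(Z, labels, num_classes=k)

In a text-classification setting, Z would be the sentence embeddings produced by a Transformer or TextCNN encoder for one batch; the MCR2 loss then replaces (or supplements) the usual cross-entropy objective.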