Oral
Understanding the Downstream Instability of Word Embeddings
Megan Leszczynski · Avner May · Jian Zhang · Sen Wu · Christopher Aberger · Christopher Re

Mon Mar 2nd 05:20 -- 05:45 PM @ Ballroom A

Many industrial machine learning (ML) systems require frequent retraining to keep up-to-date with constantly changing data. This retraining exacerbates a major challenge facing ML systems today: model training is unstable, i.e., small changes in training data can cause significant changes in the model's predictions. In this paper, we develop a deeper understanding of this instability, with a focus on how a core building block of modern natural language processing (NLP) pipelines---pre-trained word embeddings---affects the instability of downstream NLP models. We first empirically reveal a tradeoff between stability and memory: increasing the embedding memory 2x can reduce the disagreement in predictions due to small changes in training data by 5% to 39% (relative). To theoretically explain this tradeoff, we introduce a new measure of embedding instability---the eigenspace instability measure. We relate the eigenspace instability measure to downstream instability by proving a bound on the disagreement in downstream predictions introduced by the change in word embeddings. Practically, we show that the eigenspace instability measure can be a cost-effective way to choose embedding parameters to minimize instability without training downstream models, achieving up to 3.71x lower error rates than existing embedding distance measures. Finally, we demonstrate that the observed stability-memory tradeoffs extend to other types of embeddings as well, including knowledge graph and contextual word embeddings.
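As a concrete illustration of the downstream instability metric the abstract describes, the sketch below estimates the fraction of test predictions that flip when the same downstream task is retrained on two versions of the pre-trained embeddings. This is a minimal, hypothetical sketch rather than the paper's pipeline: the helper names (featurize, downstream_disagreement), the averaged-word-vector features, and the logistic regression downstream model are all illustrative assumptions.

    # Illustrative sketch (not the authors' exact pipeline): measure downstream
    # instability as the fraction of test predictions that disagree between two
    # downstream models trained on two versions of pre-trained embeddings.
    # Assumes emb_v1 / emb_v2 are embedding matrices over a shared vocab (dict
    # mapping word -> row index), plus labeled data train_texts, train_y, test_texts.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def featurize(texts, vocab, emb):
        # Average word vectors as a simple downstream feature representation.
        feats = np.zeros((len(texts), emb.shape[1]))
        for i, text in enumerate(texts):
            ids = [vocab[w] for w in text.split() if w in vocab]
            if ids:
                feats[i] = emb[ids].mean(axis=0)
        return feats

    def downstream_disagreement(emb_v1, emb_v2, vocab,
                                train_texts, train_y, test_texts):
        preds = []
        for emb in (emb_v1, emb_v2):
            X_train = featurize(train_texts, vocab, emb)
            X_test = featurize(test_texts, vocab, emb)
            model = LogisticRegression(max_iter=1000).fit(X_train, train_y)
            preds.append(model.predict(X_test))
        # Instability: fraction of test examples whose predicted label flips
        # when only the embeddings (not the labeled data) change.
        return float(np.mean(preds[0] != preds[1]))

Under this reading, the stability-memory tradeoff corresponds to downstream_disagreement shrinking as the embedding dimension (and hence memory) grows.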

Author Information

Megan Leszczynski (Stanford University)
Avner May (Stanford University)
Jian Zhang (SambaNova Systems)
Sen Wu (Stanford University)
Christopher Aberger (SambaNova Systems and Stanford University)
Christopher Re (Stanford University)
