
Invited Talk
in
Workshop on Decentralized and Collaborative Learning

Example Selection for Distributed Learning

Christopher De Sa


Abstract:

Training example order in SGD has long been known to affect the convergence rate. Recent results show that accelerated rates are possible in a variety of cases for permutation-based sample orders, in which each example from the training set is used once before any example is reused. This talk will cover a line of work in my lab on decentralized learning and sample-ordering schemes. We will discuss the limits of the classic gossip algorithm and of random-reshuffling schemes, and explore how both can be improved to make SGD converge faster, in theory and in practice, with little overhead.
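For orientation, here is a minimal sketch of the two baselines the abstract names: permutation-based (random-reshuffling) sample ordering and gossip-style decentralized averaging. This is an illustrative toy, not the improved schemes from the talk. The names are assumptions: `grad_fn(w, i)` is a hypothetical helper returning the gradient of example `i` at parameters `w`, and `mixing` is assumed to be a doubly stochastic matrix encoding the communication graph.

```python
import numpy as np

def random_reshuffling_order(n_examples, epochs, seed=0):
    """Yield indices so that every example is used exactly once
    per epoch before any example is reused (random reshuffling)."""
    rng = np.random.default_rng(seed)
    for _ in range(epochs):
        yield from rng.permutation(n_examples)

def decentralized_sgd(grad_fn, w0, n_workers, n_examples, mixing, lr, epochs):
    """Classic gossip baseline: each worker takes a local SGD step using
    its own reshuffled order, then averages its model with its neighbors
    via the doubly stochastic matrix `mixing`."""
    W = np.tile(w0, (n_workers, 1))  # one model copy per worker
    orders = [random_reshuffling_order(n_examples, epochs, seed=k)
              for k in range(n_workers)]
    for _ in range(epochs * n_examples):
        grads = np.stack([grad_fn(W[k], next(orders[k]))
                          for k in range(n_workers)])
        W = mixing @ (W - lr * grads)  # local step, then gossip averaging
    return W.mean(axis=0)
```

With-replacement SGD would instead draw each index i.i.d. uniformly at random; reshuffling's without-replacement orders are what enable the accelerated rates the abstract refers to.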
