arxiv:2307.13813

How to Scale Your EMA

Published on Jul 25, 2023
· Submitted by akhaliq on Jul 28, 2023
#3 Paper of the day

Abstract

Preserving training dynamics across batch sizes is an important tool for practical machine learning as it enables the trade-off between batch size and wall-clock time. This trade-off is typically enabled by a scaling rule, for example, in stochastic gradient descent, one should scale the learning rate linearly with the batch size. Another important tool for practical machine learning is the model Exponential Moving Average (EMA), which is a model copy that does not receive gradient information, but instead follows its target model with some momentum. This model EMA can improve the robustness and generalization properties of supervised learning, stabilize pseudo-labeling, and provide a learning signal for Self-Supervised Learning (SSL). Prior works have treated the model EMA separately from optimization, leading to different training dynamics across batch sizes and lower model performance. In this work, we provide a scaling rule for optimization in the presence of model EMAs and demonstrate its validity across a range of architectures, optimizers, and data modalities. We also show the rule's validity where the model EMA contributes to the optimization of the target model, enabling us to train EMA-based pseudo-labeling and SSL methods at small and large batch sizes. For SSL, we enable training of BYOL up to batch size 24,576 without sacrificing performance, optimally a 6× wall-clock time reduction.
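The abstract's idea can be sketched in code. Below is a minimal, PyTorch-style sketch assuming the paper's exponential momentum scaling (when the batch size is scaled by a factor κ, scale the EMA momentum as ρ̂ = ρ^κ, alongside the usual linear learning-rate scaling for SGD). The concrete values, helper names, and training-loop placement are illustrative assumptions, not taken from the paper's experiments.

```python
import copy
import torch

def scale_ema_momentum(rho_base: float, kappa: float) -> float:
    """EMA Scaling Rule (assumed form): rho_hat = rho_base ** kappa,
    applied together with the optimizer's own scaling rule
    (e.g. linear learning-rate scaling for SGD)."""
    return rho_base ** kappa

@torch.no_grad()
def ema_update(ema_model: torch.nn.Module, model: torch.nn.Module, rho: float) -> None:
    """theta_ema <- rho * theta_ema + (1 - rho) * theta; the EMA copy
    receives no gradient information."""
    for p_ema, p in zip(ema_model.parameters(), model.parameters()):
        p_ema.mul_(rho).add_(p, alpha=1.0 - rho)

# Illustrative example: scaling the batch size from 256 to 2048 (kappa = 8).
rho_base, kappa = 0.999, 2048 / 256
rho = scale_ema_momentum(rho_base, kappa)  # ~0.992

model = torch.nn.Linear(10, 10)
ema_model = copy.deepcopy(model)  # model copy tracked by the EMA

# ... inside the training loop, after each optimizer step:
ema_update(ema_model, model, rho)
```

With the momentum scaled this way, the intent is that the EMA trajectory at the larger batch size approximately tracks the small-batch trajectory as a function of data seen, which is what lets EMA-based pseudo-labeling and SSL methods keep their behavior when the batch size changes.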

Community

Paper author

@dbusbridge (cc @Sylvestre )

$$net_m = \sum_{i=0}^H w_{im} \tilde{o}_i$$

(display mode, surround by $$)

and here it is in inline mode \( o_m = y_m = f_m(net_m) \) hello

EDIT: inline mode does not seem to work right now, will investigate

Awesome, thank you!

Mastering EMA for Large-Scale Machine Learning

Links 🔗:

👉 Subscribe: https://www.youtube.com/@Arxflix
👉 Twitter: https://x.com/arxflix
👉 LMNT (Partner): https://lmnt.com/

By Arxflix

