DoG is SGD’s Best Friend: A Parameter-Free Dynamic Step Size Schedule

Maor Ivgi, Oliver Hinder, Yair Carmon

Published in ICML (2023)

We propose a tuning-free dynamic SGD step size formula, which we call Distance over Gradients (DoG). The DoG step sizes depend on simple empirical quantities (distance from the initial point and norms of gradients) and have no “learning rate” parameter. Theoretically, we show that a slight variation of the DoG formula enjoys strong parameter-free convergence guarantees for stochastic convex optimization assuming only locally bounded stochastic gradients. Empirically, we consider a broad range of vision and language transfer learning tasks, and show that DoG’s performance is close to that of SGD with tuned learning rate. We also propose a per-layer variant of DoG that generally outperforms tuned SGD, approaching the performance of tuned Adam.
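As a rough illustration of the idea (a sketch, not the paper's exact algorithm), the snippet below implements a DoG-style update in plain NumPy: at each iteration the step size is the largest distance observed so far from the initial point, divided by the square root of the running sum of squared gradient norms. The function and parameter names (`dog_sgd`, `grad_fn`, `r_init`, `eps`) are ours for illustration, and `r_init` plays the role of a small initial "distance" so the first step is nonzero.

```python
import numpy as np

def dog_sgd(grad_fn, x0, steps=1000, r_init=1e-6, eps=1e-8):
    """DoG-style SGD sketch: step size = (max distance from x0) / sqrt(sum of squared grad norms).

    grad_fn(x) should return a stochastic gradient at x (a NumPy array).
    r_init and eps are illustrative safeguards against a zero first step
    and division by zero; they are not the paper's exact constants.
    """
    x = np.array(x0, dtype=float)
    x_init = x.copy()
    max_dist = r_init        # running max of ||x_k - x_0|| (distance from the initial point)
    grad_sq_sum = 0.0        # running sum of ||g_k||^2 (norms of gradients)

    for _ in range(steps):
        g = np.asarray(grad_fn(x), dtype=float)
        grad_sq_sum += float(np.sum(g * g))
        eta = max_dist / (np.sqrt(grad_sq_sum) + eps)   # "distance over gradients"
        x = x - eta * g
        max_dist = max(max_dist, float(np.linalg.norm(x - x_init)))
    return x
```

For example, on a noisy quadratic objective, `dog_sgd(lambda x: x + 0.1 * np.random.randn(*x.shape), np.ones(10))` should drive the iterate toward the minimizer without any learning-rate tuning; the only quantities entering the step size are observed distances and gradient norms.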

See the paper page here or download the PDF directly.

Check out the paper repository on GitHub.

Cite as

@article{ivgi2023dog,
  title={{D}o{G} is {SGD}'s Best Friend: A Parameter-Free Dynamic Step Size Schedule},
  author={Maor Ivgi and Oliver Hinder and Yair Carmon},
  journal={arXiv:2302.12022},
  year={2023},
}