Johns Hopkins Algorithms and Complexity
https://www.cs.jhu.edu/~mdinitz/theory
Speaker: Jingfeng Wu
Affiliation: Johns Hopkins University
Title: Direction Matters: On the Implicit Regularization Effect of Stochastic Gradient Descent with Moderate Learning Rate
Abstract:
Understanding the algorithmic regularization effect of stochastic gradient descent (SGD) is one of the key challenges in modern machine learning and deep learning theory. Most existing works, however, focus on the very small or even infinitesimal learning-rate regime and fail to cover practical scenarios where the learning rate is moderate and annealing. In this paper, we make an initial attempt to characterize the particular regularization effect of SGD in the moderate learning-rate regime by studying its behavior when optimizing an overparameterized linear regression problem. In this setting, SGD and GD are known to converge to the unique minimum-norm solution; however, with a moderate and annealing learning rate, we show that they exhibit different directional biases: SGD converges along the large-eigenvalue directions of the data matrix, while GD follows the small-eigenvalue directions. Furthermore, we show that this directional bias matters when early stopping is adopted: the SGD output is nearly optimal while the GD output is suboptimal. Finally, our theory explains several folk practices used for SGD hyperparameter tuning, such as (1) linearly scaling the initial learning rate with the batch size, and (2) running SGD at a high learning rate even after the loss stops decreasing.
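The setting in the abstract can be sketched numerically. The following is a minimal, illustrative simulation, not the paper's construction or proof: it fits a noiseless overparameterized linear regression (more features than samples) with full-batch GD and single-sample SGD at one fixed learning rate, then probes each iterate's alignment with the top right-singular direction of the data matrix. The dimensions, step count, and learning rate are assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Overparameterized linear regression: n samples, d > n features, noiseless labels.
n, d = 20, 100
X = rng.standard_normal((n, d))
w_star = rng.standard_normal(d)
y = X @ w_star

def grad(w, Xb, yb):
    """Mean-squared-error gradient on a (mini-)batch."""
    return Xb.T @ (Xb @ w - yb) / len(yb)

lr, steps = 0.01, 300          # single fixed learning rate, for illustration only
w_gd = np.zeros(d)
w_sgd = np.zeros(d)
for _ in range(steps):
    w_gd -= lr * grad(w_gd, X, y)                     # full-batch GD
    i = rng.integers(n)                               # single-sample SGD
    w_sgd -= lr * grad(w_sgd, X[i:i + 1], y[i:i + 1])

# Top right-singular direction of X, i.e. the largest-eigenvalue direction of X^T X.
_, _, Vt = np.linalg.svd(X, full_matrices=False)
v_top = Vt[0]

loss = lambda w: np.mean((X @ w - y) ** 2)
# Print the training loss and the alignment with the top direction for each iterate.
print(f"GD : loss={loss(w_gd):.4f}, |<v_top, w>|={abs(v_top @ w_gd):.4f}")
print(f"SGD: loss={loss(w_sgd):.4f}, |<v_top, w>|={abs(v_top @ w_sgd):.4f}")
```

The projection onto `v_top` is only a crude probe of the directional bias the abstract describes; the paper's formal statements concern the moderate and annealing learning-rate regime, which this fixed-step sketch does not reproduce.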
Date: Wednesday, November 4, 2020
Time: 12:00 PM – 1:00 PM (America/New_York)
Event: [Theory Seminar] Jingfeng Wu
Cost: free