Research Topics: Deep Learning Beyond the Gradient

Keywords: Automatic differentiation, Hessian, Fisher, K-FAC, second-order optimization, Newton's method

TL;DR: Contemporary deep learning (DL) algorithms rely on the gradient. During my PhD, I developed tools to efficiently compute quantities beyond the gradient (higher-order information), like gradient statistics over a mini-batch and Hessian approximations. This information enables a richer view of the loss landscape, quantifying its noise (stochasticity) and curvature (geometry). During my Postdoc, I want to explore the potential of higher-order information for building more powerful DL methods and to further improve automatic differentiation.

Interested in similar topics? Let's chat!

No time to read on? Don't worry. I'll try to find you and reach out ;)

(details below, feel free to skip)

Improving Automatic Differentiation Frameworks

Next-generation ML frameworks should be able to seamlessly and efficiently compute not only the gradient, but also higher-order information. Currently, computing such quantities often requires workarounds that are inefficient or complicated to implement, which limits their accessibility to users (see the sketch after the list of ideas below for a typical example). I want to identify shortcomings in existing AD libraries to improve their design and performance, and to make higher-order information accessible to the community. Ideas:

  • Make frameworks more aware of (i) linear algebra structure and (ii) hardware to automate many numerical tricks and optimizations (for instance optimizing contraction schedules, partial operand access/slicing, \(\dots\)).
  • Develop representations for higher-order derivatives that simplify graph analysis and optimization.
  • Explore the implications of new concepts, for instance distinguishing between co- and contra-variant indices of tensors rather than treating them as multi-dimensional arrays [1].
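
As a concrete example of the kind of workaround meant above: even a Hessian-vector product, one of the cheapest pieces of higher-order information, is usually obtained by differentiating a gradient-vector inner product a second time (Pearlmutter's trick) rather than through a first-class interface. A minimal PyTorch sketch; the model, data, and function name are illustrative toys, not part of any particular library's API:

    import torch

    def hessian_vector_product(loss, params, v):
        # First backward pass: gradient with a graph, so it can be differentiated again.
        grads = torch.autograd.grad(loss, params, create_graph=True)
        flat_grad = torch.cat([g.reshape(-1) for g in grads])
        # Second backward pass on the gradient-vector inner product yields H @ v.
        Hv = torch.autograd.grad((flat_grad * v).sum(), params)
        return torch.cat([h.reshape(-1) for h in Hv])

    # Toy usage: least-squares loss of a small linear model.
    model = torch.nn.Linear(4, 1)
    X, y = torch.randn(8, 4), torch.randn(8, 1)
    loss = ((model(X) - y) ** 2).mean()
    params = list(model.parameters())
    v = torch.randn(sum(p.numel() for p in params))
    print(hessian_vector_product(loss, params, v))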

Establishing Second-order Methods

Note: 'Second-order method' may refer to a second-order optimization method or, more broadly, to any method motivated by a quadratic Taylor approximation (for instance, model compression [2] or Laplace approximations [3]).
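
For concreteness, writing \(\mathcal{L}\) for the loss, the quadratic Taylor model around the current parameters \(\boldsymbol{\theta}_t\) and the Newton step it induces are

\[
\mathcal{L}(\boldsymbol{\theta}) \approx \mathcal{L}(\boldsymbol{\theta}_t) + \nabla \mathcal{L}(\boldsymbol{\theta}_t)^{\top} (\boldsymbol{\theta} - \boldsymbol{\theta}_t) + \tfrac{1}{2} (\boldsymbol{\theta} - \boldsymbol{\theta}_t)^{\top} \nabla^2 \mathcal{L}(\boldsymbol{\theta}_t)\, (\boldsymbol{\theta} - \boldsymbol{\theta}_t),
\qquad
\boldsymbol{\theta}_{t+1} = \boldsymbol{\theta}_t - \big(\nabla^2 \mathcal{L}(\boldsymbol{\theta}_t)\big)^{-1} \nabla \mathcal{L}(\boldsymbol{\theta}_t).
\]

Second-order methods in the broader sense replace the Hessian \(\nabla^2 \mathcal{L}\) by a curvature proxy such as the GGN or Fisher, or use the quadratic model for purposes other than optimization.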

DL is dominated by first-order (gradient-based) methods. Until recently, exploring methods that rely on other information was prohibitively expensive and complicated due to software constraints. I want to investigate the potential of higher-order information, such as curvature and noise, for DL applications. Ideas:

  • Identify challenges for second-order optimization methods in the DL regime and work on fixes.
    • What do "Newton's method" and "air travel" have in common? Both are very fast, but their worst case is bad! (slide 97) Is noise a problem for their stability in the mini-batch setting? Can we develop techniques to improve stability, for instance by removing the need for damping (fractional NGD [4]), applying momentum to Newton steps (determinantal averaging [5]), or designing other safeguard heuristics? (A minimal damped Newton step is sketched after this list.)
    • Is there theory on the impact of noise in the convex setting?
    • Is their implicit bias a problem for generalization?
    • Reduce computational cost, for instance via 'mixed training' (split the NN parameters into two groups; train one group with a first-order method and the other with a second-order method).
  • Identify new tasks where higher-order information is useful, or even required because first-order methods fail.
  • Hyperparameter adaptation with higher-order information.
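
To make the damping and stability points above concrete, here is a minimal sketch of a single damped Newton step, solved with conjugate gradients and Hessian-vector products. It assumes the damped Hessian is positive definite so that CG converges; in DL practice one would rather use a GGN/Fisher-vector product. All names and the toy setup are illustrative:

    import torch

    def damped_newton_step(loss, params, damping=1e-3, cg_iters=50, tol=1e-8):
        """Apply one step theta <- theta - (H + damping * I)^{-1} grad, in place."""
        grads = torch.autograd.grad(loss, params, create_graph=True)
        g = torch.cat([x.reshape(-1) for x in grads])

        def Av(v):
            # (H + damping * I) @ v via double backprop.
            Hv = torch.autograd.grad((g * v).sum(), params, retain_graph=True)
            return torch.cat([h.reshape(-1) for h in Hv]) + damping * v

        # Plain conjugate gradients for (H + damping * I) x = g.
        x = torch.zeros_like(g)
        r = g.detach().clone()
        d = r.clone()
        rs = r @ r
        for _ in range(cg_iters):
            Ad = Av(d)
            alpha = rs / (d @ Ad)
            x = x + alpha * d
            r = r - alpha * Ad
            rs_new = r @ r
            if rs_new.sqrt() < tol:
                break
            d = r + (rs_new / rs) * d
            rs = rs_new

        with torch.no_grad():  # write the Newton update back into the parameters
            offset = 0
            for p in params:
                n = p.numel()
                p.sub_(x[offset:offset + n].reshape(p.shape))
                offset += n

    # Toy usage on a small classification loss.
    model = torch.nn.Linear(5, 3)
    X, y = torch.randn(16, 5), torch.randint(0, 3, (16,))
    params = list(model.parameters())
    damped_newton_step(torch.nn.functional.cross_entropy(model(X), y), params)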

Empirical Studies and Novel Approximations of Higher-order Information

Empirical investigations of DL quantities have observed pronounced patterns, for instance in the Hessian's spectrum [6]–[8]. I want to investigate and understand the origin of structure in higher-order information. This enables the development of efficient representations that specifically tackle the approximation of relevant terms. Ideas:

  • Use the tiny subspace, class structure, and layer structure for new approximations. Numerous empirical works have observed that the Hessian's eigenvalue spectrum clusters into groups [6]–[8]. The sizes of those groups are affected by the DNN's output dimension \(C\) (the number of classes). [9] observed that after a few steps of training, the gradient resides mostly in the first group, the Hessian's top-\(C\) eigenspace. Keeping track of interactions between gradient and Hessian can therefore be sped up by projecting onto that tiny subspace (see the sketch after this list).
    • [10] attributed these groups to terms in a hierarchical decomposition of the Hessian. The term responsible for the top-\(C\) eigenvalues can be computed relatively efficiently. The tiny subspace seems to change little during training, which allows constructing light-weight, high-quality Hessian approximations that can be recycled in later iterations (momentum).
  • Understand why using gradients to approximate curvature seems to work in DL. For instance: the empirical Fisher (EF) is a popular approximation of the Fisher/Hessian built from gradients. [11] shows that, in general, it can be dangerous to use the EF instead of the Fisher for optimization. Empirically, however, the EF performs well in DL applications [2]. Also, the EF and the Fisher/GGN look visually quite similar for a DNN. Maybe both matrices share important properties in the DL setting, despite the concerns of [11]? (A small numerical comparison of the two matrices is sketched after this list.)
  • Investigate the evolution of such quantities during training to enable phase-dependent algorithms. By monitoring them during training, one can identify different regimes and design specialized methods for each [12].
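
The following sketch illustrates the tiny-subspace observation on a model small enough to assemble the Hessian explicitly: it eigendecomposes the Hessian and measures how much of the gradient lies in the top-\(C\) eigenspace. Note that the cited effect shows up after a few steps of training, not necessarily at initialization; the model and data are illustrative toys:

    import torch

    torch.manual_seed(0)
    D, C, N = 10, 3, 64
    model = torch.nn.Sequential(
        torch.nn.Linear(D, 16), torch.nn.ReLU(), torch.nn.Linear(16, C)
    )
    X, y = torch.randn(N, D), torch.randint(0, C, (N,))
    params = list(model.parameters())
    P = sum(p.numel() for p in params)

    loss = torch.nn.functional.cross_entropy(model(X), y)
    grads = torch.autograd.grad(loss, params, create_graph=True)
    flat_grad = torch.cat([g.reshape(-1) for g in grads])

    def hvp(v):
        # Hessian-vector product via double backprop.
        Hv = torch.autograd.grad((flat_grad * v).sum(), params, retain_graph=True)
        return torch.cat([h.reshape(-1) for h in Hv])

    # The model is tiny, so assemble the dense Hessian column by column.
    eye = torch.eye(P)
    H = torch.stack([hvp(eye[i]) for i in range(P)], dim=1)
    evals, evecs = torch.linalg.eigh(H)
    top_C = evecs[:, -C:]  # eigenvectors of the C largest eigenvalues

    g = flat_grad.detach()
    frac = (top_C.T @ g).pow(2).sum() / g.pow(2).sum()
    print(f"fraction of squared gradient norm in the top-{C} eigenspace: {frac:.3f}")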
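
And a companion sketch for the empirical Fisher question: on a model small enough to build both matrices densely, it assembles the EF from per-sample gradient outer products and the Fisher/GGN from per-sample Jacobians and the softmax cross-entropy Hessian, so the two can be compared directly. Again, the names and the toy setup are illustrative:

    import torch

    torch.manual_seed(0)
    D, C, N = 5, 3, 8
    model = torch.nn.Linear(D, C)  # tiny model, so dense P x P matrices are fine
    X, y = torch.randn(N, D), torch.randint(0, C, (N,))
    params = list(model.parameters())
    P = sum(p.numel() for p in params)

    def flat_grad(scalar):
        grads = torch.autograd.grad(scalar, params, retain_graph=True)
        return torch.cat([g.reshape(-1) for g in grads])

    # Empirical Fisher: average outer product of per-sample gradients.
    EF = torch.zeros(P, P)
    for n in range(N):
        g = flat_grad(torch.nn.functional.cross_entropy(model(X[n:n + 1]), y[n:n + 1]))
        EF += torch.outer(g, g) / N

    # Fisher/GGN: J^T H J with H = diag(p) - p p^T, the softmax cross-entropy Hessian.
    GGN = torch.zeros(P, P)
    for n in range(N):
        logits = model(X[n:n + 1]).squeeze(0)
        J = torch.stack([flat_grad(logits[c]) for c in range(C)])  # per-sample Jacobian (C, P)
        p = torch.softmax(logits.detach(), dim=0)
        H = torch.diag(p) - torch.outer(p, p)
        GGN += J.T @ H @ J / N

    print("relative difference:", (torch.linalg.norm(EF - GGN) / torch.linalg.norm(GGN)).item())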

References

[1] A. Kristiadi, F. Dangel, and P. Hennig, “The geometry of neural nets’ parameter spaces under reparametrization,” 2023.
[2] S. P. Singh and D. Alistarh, “WoodFisher: Efficient second-order approximation for neural network compression,” 2020.
[3] E. Daxberger, A. Kristiadi, A. Immer, R. Eschenhagen, M. Bauer, and P. Hennig, “Laplace redux - effortless Bayesian deep learning,” 2021.
[4] D. Huh, “Curvature-corrected learning dynamics in deep neural networks,” 2020.
[5] M. Derezinski and M. W. Mahoney, “Distributed estimation of the inverse Hessian by determinantal averaging,” in Advances in Neural Information Processing Systems (NeurIPS), 2019.
[6] V. Papyan, “The full spectrum of deepnet Hessians at scale: Dynamics with SGD training and sample size,” 2019.
[7] L. Sagun, U. Evci, V. U. Guney, Y. Dauphin, and L. Bottou, “Empirical analysis of the Hessian of over-parametrized neural networks,” 2018.
[8] L. Sagun, L. Bottou, and Y. LeCun, “Eigenvalues of the Hessian in deep learning: Singularity and beyond,” 2017.
[9] G. Gur-Ari, D. A. Roberts, and E. Dyer, “Gradient descent happens in a tiny subspace,” 2018.
[10] V. Papyan, “Measurements of three-level hierarchical structure in the outliers in the spectrum of deepnet Hessians,” 2019.
[11] F. Kunstner, P. Hennig, and L. Balles, “Limitations of the empirical Fisher approximation for natural gradient descent,” 2019.
[12] L. Tatzel, P. Hennig, and F. Schneider, “Late-phase second-order training,” 2022.

Author: Felix Dangel

Created: 2023-04-18 Tue 17:31
