Nesterov Momentum Equivalence Derivation

That Missing Piece in the Papers and the Slides

2020-02-08 • 5 min read

During the Winter Quarter of 2020 at UCLA, I was taking a class on neural networks and deep learning. The course is numbered ECE C147/247 and is adapted from the famous Stanford CS 231n.

Note: I assume you (the reader) already know what a momentum update is, and how Nesterov momentum differs from standard momentum.

In the chapter on optimization during training (the gradient descent stage), we learned about a technique called Nesterov momentum, which is used to update the weights and speed up training.

The formulation of Nesterov Momentum update is given as

Updating momentum:

$$\mathbf{v} \leftarrow \alpha\mathbf{v} - \epsilon\nabla_{\theta} J(\theta + \alpha\mathbf{v})$$

Updating parameter:

$$\theta \leftarrow \theta + \mathbf{v}$$

Here $J$ is the loss function. However, this formulation is difficult to implement in code: we usually have $\nabla_\theta J(\theta)$ available to us rather than $\nabla_{\theta} J(\theta + \alpha\mathbf{v})$.
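To make the difficulty concrete, here is a minimal sketch of the original formulation (my own illustration, not from the course), assuming a hypothetical `grad` callable that can return $\nabla_\theta J$ at any point we pass in; a typical backward pass only gives us the gradient at the current parameters, so the lookahead evaluation would require an extra forward/backward pass:

```python
def nesterov_step_naive(theta, v, grad, alpha=0.9, eps=1e-3):
    """One step of the original formulation.

    `grad` is a hypothetical callable returning dJ/dtheta at any point,
    which is exactly what typical training code does NOT hand us.
    """
    lookahead = theta + alpha * v          # theta + alpha*v
    v = alpha * v - eps * grad(lookahead)  # v <- alpha*v - eps * gradient at lookahead
    theta = theta + v                      # theta <- theta + v
    return theta, v
```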

To solve this problem, an alternative formulation is given. We first let

$$\tilde{\theta}_\text{old} = \theta_\text{old} + \alpha\mathbf{v}_\text{old}$$

Then, the formulation becomes

$$\mathbf{v}_\text{new} = \alpha\mathbf{v}_\text{old} - \epsilon\nabla_{\tilde{\theta}_\text{old}} J(\tilde{\theta}_\text{old})$$
$$\tilde{\theta}_\text{new} = \tilde{\theta}_\text{old} + \mathbf{v}_\text{new} + \alpha(\mathbf{v}_\text{new} - \mathbf{v}_\text{old})$$
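This alternative form only ever needs the gradient at the stored parameter. A rough sketch of one update step (again my own illustration; `grad_at_theta_tilde` stands for whatever gradient the backward pass returns at the currently stored parameters):

```python
def nesterov_step(theta_tilde, v, grad_at_theta_tilde, alpha=0.9, eps=1e-3):
    """One step of the alternative formulation, storing theta_tilde = theta + alpha*v."""
    v_prev = v
    v = alpha * v - eps * grad_at_theta_tilde              # momentum update
    theta_tilde = theta_tilde + v + alpha * (v - v_prev)   # parameter update
    return theta_tilde, v
```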

In class, no explanation was given of how the two formulations are equivalent. Instead, we were given a hint that the proof can be done through a substitution of variables. This formulation still confused me a lot.

There are multiple points of confusion here:

  1. How do we calculate $\nabla_{\tilde\theta_\text{old}}J(\tilde\theta_\text{old})$?
  2. What is the actual meaning of $\tilde\theta$?
  3. What are we storing as our parameter?

I then sought help from the Stanford course webpage on this chapter. It pointed to two papers on the derivation of Nesterov momentum. Only one of them (section 3.5) discusses the alternative formulation, but it simply states the two formulations without proof. I understand that the target audience of the paper is very likely capable of figuring out the equivalence, but I am not one of them.

The paper did offer some insight:

the parameters $\tilde\theta$ are a completely equivalent replacement of $\theta$.

This offers the answer to confusion 3: we are storing $\tilde\theta$ as the parameter. In code, what is given to us is the derivative of $J(\tilde\theta)$ with respect to $\theta$. Using the chain rule, we can show that

$$\begin{aligned} \tilde\theta &= \theta + \alpha\mathbf{v}\\ \therefore \nabla_{\theta}\tilde\theta &= 1 \end{aligned}$$
$$\begin{aligned} \nabla_{\theta}J(\tilde\theta) &= \nabla_{\theta}\tilde\theta \, \nabla_{\tilde\theta}J(\tilde\theta)\\ \therefore \nabla_{\theta}J(\tilde\theta) &= \nabla_{\tilde\theta}J(\tilde\theta) \end{aligned}$$

This answers confusion 1: in code we are given the gradient of the loss at $\tilde\theta$ with respect to $\theta$, and it is equivalent to the gradient with respect to $\tilde\theta$. Now back to confusion 2: what is the meaning of $\tilde\theta$? Is it okay to use it as the parameter for prediction and testing? The answer is yes. By definition, $\tilde\theta$ is simply the parameter after one momentum update. Therefore, we can think of it as a parameter value that is closer to the optimum.

I had to take the time to do the derivation on my own. Here, I offer the derivation of the equivalence. First, we have the substitution and the original formulation.

$$\tilde{\theta}_\text{old} = \theta_\text{old} + \alpha\mathbf{v}_\text{old} \tag{1}$$
$$\mathbf{v}_\text{new} = \alpha\mathbf{v}_\text{old} - \epsilon\nabla_{\theta_\text{old}} J(\theta_\text{old} + \alpha\mathbf{v}_\text{old}) \tag{2}$$
$$\theta_\text{new} = \theta_\text{old} + \mathbf{v}_\text{new} \tag{3}$$

We substitute $(1)$ into $(2)$ and apply the relationship $\nabla_{\theta}J(\tilde\theta) = \nabla_{\tilde\theta}J(\tilde\theta)$:

$$\begin{aligned} \mathbf{v}_\text{new} &= \alpha\mathbf{v}_\text{old} - \epsilon\nabla_{\theta_\text{old}} J(\tilde\theta_\text{old})\\ \mathbf{v}_\text{new} &= \alpha\mathbf{v}_\text{old} - \epsilon\nabla_{\tilde\theta_\text{old}} J(\tilde\theta_\text{old}) \end{aligned} \tag{4}$$

We can rewrite $(1)$ as $\theta = \tilde\theta - \alpha\mathbf{v}$. Then, we substitute $(1)$ into $(3)$. Note that we have to substitute for both $\theta_\text{new}$ and $\theta_\text{old}$.

$$\begin{aligned} \tilde\theta_\text{new} - \alpha\mathbf{v}_\text{new} &= \tilde\theta_\text{old} - \alpha\mathbf{v}_\text{old} + \mathbf{v}_\text{new}\\ \tilde\theta_\text{new} &= \tilde\theta_\text{old} + \mathbf{v}_\text{new} + \alpha(\mathbf{v}_\text{new} - \mathbf{v}_\text{old}) \end{aligned} \tag{5}$$

Now we have arrived at the alternative formulation with $(4)$ and $(5)$. Therefore, in the code, it is the new parameter $\tilde\theta_\text{new}$ that is returned instead of $\theta_\text{new}$. Again, the idea is that the parameter actually stored and used for evaluation is always the one already updated with the momentum, so the gradient of the loss is taken at that momentum-updated parameter.
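As a sanity check (my own sketch, using a toy quadratic loss $J(\theta) = \frac{1}{2}\lVert\theta\rVert^2$ rather than anything from the course), we can run both formulations side by side and verify that $\tilde\theta = \theta + \alpha\mathbf{v}$ holds at every step:

```python
import numpy as np

grad = lambda x: x  # dJ/dx for J(x) = 0.5 * ||x||^2

alpha, eps = 0.9, 0.1
theta = np.array([1.0, -2.0])   # original formulation stores theta
v1 = np.zeros_like(theta)
theta_tilde = theta.copy()      # alternative stores theta_tilde = theta + alpha*v (v = 0 at start)
v2 = np.zeros_like(theta)

for _ in range(50):
    # Original: gradient evaluated at the lookahead point theta + alpha*v
    v1 = alpha * v1 - eps * grad(theta + alpha * v1)
    theta = theta + v1

    # Alternative: gradient evaluated at the stored parameter theta_tilde
    v2_prev = v2
    v2 = alpha * v2 - eps * grad(theta_tilde)
    theta_tilde = theta_tilde + v2 + alpha * (v2 - v2_prev)

    # The invariant theta_tilde == theta + alpha*v should hold throughout
    assert np.allclose(theta_tilde, theta + alpha * v1)
```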

