
Classical Neural Networks: What does a Loss Function Landscape look like?

Ever wondered what kind of topology we are optimising our neural networks over? Well, now you will know!

Every neural network's objective is to minimise its loss function! But what does this loss function really look like? Today, we will be showing the loss function for two different neural networks (N1, N2: fig.1).

fig.1 (rights: own image)

The loss function family we will use when training is MSE (Mean Squared Error). Although other loss function families might be interesting, we will stick with this one for the purpose of illustration.
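As a reminder, MSE is just the mean of the squared residuals between the predictions and the targets. A minimal NumPy sketch, purely for reference:

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean Squared Error: the average of the squared residuals."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return np.mean((y_true - y_pred) ** 2)

# Example: predicting [1.0, 2.5] for targets [1.0, 2.0] gives (0^2 + 0.5^2) / 2 = 0.125
print(mse([1.0, 2.0], [1.0, 2.5]))
```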

For those who are extra curious, we will be training the N2 network (the training itself is not very interesting since, again, what we want is a landscape illustration) on this distribution (fig.2: and yes, I am too lazy to add noise).

fig.2 (rights: own image)

And the N1 neural network on this one (fig.3):

fig.3 (rights: own image)

Loss function value as a function of the input (N2)

Let’s simply plot the loss function itself to begin with (fig.4).

fig.4 (rights: own image)
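To give an idea of how such a plot can be produced, the sketch below sweeps a grid of inputs through a small stand-in network and evaluates the per-point squared error against a toy target. The architecture, weights and target function here are placeholders for illustration, not the actual N2 of fig.1 or the distribution of fig.2.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical stand-in for N2: a tiny 2-input, 1-output MLP with one tanh hidden layer.
# The real N2 architecture and trained weights are those of fig.1; these are placeholders.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 2)), rng.normal(size=3)
W2, b2 = rng.normal(size=(1, 3)), rng.normal(size=1)

def predict(x):
    # x has shape (..., 2)
    h = np.tanh(x @ W1.T + b1)
    return h @ W2.T + b2

def target(x):
    # Stand-in for the noiseless distribution of fig.2 (assumed, not the author's exact data).
    return np.sin(x[..., :1]) * np.cos(x[..., 1:2])

# Evaluate the per-point squared error on a grid of inputs (x, y).
xs = np.linspace(-3, 3, 200)
ys = np.linspace(-3, 3, 200)
X, Y = np.meshgrid(xs, ys)
grid = np.stack([X, Y], axis=-1)                    # shape (200, 200, 2)
loss = (predict(grid) - target(grid))[..., 0] ** 2

plt.contourf(X, Y, loss, levels=50)
plt.colorbar(label="squared error")
plt.xlabel("input x"); plt.ylabel("input y")
plt.title("Loss as a function of the input (cf. fig.4)")
plt.show()
```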

Remarks to be made:

  • We see that the error values are particularly high around x = -2 and y ∈ [-1, 1].
  • Beyond showing what a loss function looks like, such an illustration can be useful to someone who wants to purposefully attack the network: for an adversary, this can be a first exploratory step (see the sketch below).
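As a rough illustration of that exploratory step, the sketch below runs gradient ascent on the input of a black-box loss function, climbing towards a high-loss region. The loss used here is a toy bump centred around x = -2, mimicking the shape of fig.4, not the real N2.

```python
import numpy as np

# Toy loss-vs-input surface with a bump around (x, y) = (-2, 0), standing in for fig.4.
def loss_at(x):
    return float(np.exp(-((x[0] + 2.0) ** 2 + x[1] ** 2)))

# Central-difference gradient, so the "attacker" only needs black-box loss evaluations.
def numerical_grad(f, x, eps=1e-5):
    g = np.zeros_like(x)
    for i in range(len(x)):
        d = np.zeros_like(x)
        d[i] = eps
        g[i] = (f(x + d) - f(x - d)) / (2 * eps)
    return g

# Gradient *ascent* on the input: move towards inputs where the model's loss is high.
x = np.array([-1.0, 0.8])
for _ in range(200):
    x += 0.5 * numerical_grad(loss_at, x)
print("high-loss input found near:", x, "loss:", loss_at(x))
```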

Loss function value as a function of weights (N1)

Being able to see the loss function as a function of the input is nice, but not exactly what people are usually interested in. Seeing the landscape we optimise over is definitely more useful for crafting an architecture! Now, as mentioned in my previous article, N1 has 7 scalar weights to optimise. Plotting a 7-dimensional surface would do little for our understanding, so we will arbitrarily project onto two dimensions. Note that we fix the input, so that the only variables are the two chosen weights (fig.5).

fig.5 (rights: own image)
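Here is a sketch of how such a two-weight slice can be computed. It assumes N1 is a 1-2-1 tanh MLP with biases, which gives the 7 scalar parameters mentioned above but may not match the exact architecture of fig.1; the data point and the fixed weight values are placeholders.

```python
import numpy as np
import matplotlib.pyplot as plt

# One fixed (input, target) pair and fixed values for the parameters we do NOT vary.
# All numbers below are placeholders for illustration.
x0, y0 = 0.5, 0.3
w1 = np.array([0.4, -0.7]); b1 = np.array([0.1, -0.2])   # input -> hidden
w2 = np.array([0.8, 0.5]);  b2 = 0.05                    # hidden -> output

def loss(wa, wb):
    # wa replaces w1[0] (first input-to-hidden weight), wb replaces w2[0]
    # (first hidden-to-output weight); everything else stays fixed.
    h0 = np.tanh(wa * x0 + b1[0])
    h1 = np.tanh(w1[1] * x0 + b1[1])
    y_hat = wb * h0 + w2[1] * h1 + b2
    return (y_hat - y0) ** 2

# Sweep the two chosen weights over a grid to get a 2-D slice of the 7-D landscape.
A, B = np.meshgrid(np.linspace(-4, 4, 200), np.linspace(-4, 4, 200))
Z = loss(A, B)

plt.contourf(A, B, Z, levels=50)
plt.colorbar(label="squared error (one data point)")
plt.xlabel("w1[0]"); plt.ylabel("w2[0]")
plt.title("Loss slice over two weights, input fixed (cf. fig.5)")
plt.show()
```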

Fig.5 shows the landscape for a single data point. Several things are worth noting:

  • If we were to minimise the loss over this plot, i.e. over the two arbitrarily picked weights, then the loss for this one data point would obviously decrease.
  • To minimise such a function, a simple gradient descent would be enough; not even SGD is needed, since the surface is strictly convex. However, because it has a plateau, we would want some momentum in the descent, rather than plain gradient descent, to keep a large enough step in the flat region (see the sketch after this list).
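To make the plateau point concrete, here is a toy comparison of plain gradient descent and heavy-ball momentum on a one-dimensional convex loss with a long, nearly flat stretch before the minimum. The function is a placeholder, not the actual surface of fig.5.

```python
import numpy as np

# Convex 1-D toy loss: steep on the left, then a long near-flat plateau before the
# minimum around w ~= 5. A placeholder shape, not the real fig.5 slice.
def loss(w):
    return np.log1p(np.exp(-4.0 * w)) + 0.005 * (w - 5.0) ** 2

def grad(w):
    return -4.0 / (1.0 + np.exp(4.0 * w)) + 0.01 * (w - 5.0)

def plain_gd(w, lr=0.1, steps=500):
    for _ in range(steps):
        w -= lr * grad(w)
    return w

def momentum_gd(w, lr=0.1, beta=0.9, steps=500):
    v = 0.0
    for _ in range(steps):
        v = beta * v - lr * grad(w)   # heavy ball: accumulate velocity across the plateau
        w += v
    return w

w0 = 1.0  # start on the plateau, where the gradient is tiny
print("plain GD   :", plain_gd(w0))      # still crawling, far from the minimum
print("momentum GD:", momentum_gd(w0))   # essentially reaches w ~= 5
```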

Such a plot can be produced for multiple data points when MSE is used as the loss metric. Picturing the loss of a general neural network is harder because of the number of weights, but this kind of slicing can be a way to avoid blindly trying optimisation algorithms when training, and to instead understand the underlying data model that you want to approximate. I hope, again, that this helps people understand that Machine Learning is not black magic and truly requires analysis! Hyperparameter search does not have to be a series of random trials.

