Calculus Behind Gradient Descent
2022-02-12 · 6 min read
Derivative and Partial Derivative Intuition
We’ve all heard in college lectures that the derivative gives us the rate of change, or the slope of the function, or the rise over run. These all mean the same thing: the derivative tells us how fast or slow the output of a function changes when its input changes.
Derivative of Function of Single Variable
A straight line with the equation f(x) = 2x has a constant slope of 2, which means that for any point on the line, changing the value of x by some amount results in a change in y that is 2 times the change in x. This is a rather simple example, so let's dive into a more complicated one. Consider the equation of a parabola, f(x) = x², whose plot looks like:
Now, while you continue reading, imagine you are standing at the base, or the minima, of the parabola at x = 0, at which the derivative 2x = 0, which means even if you step a little bit towards the right or left, i.e. in the x direction, you would not have moved much in the y direction. Notice that I wrote not moved much and not zero, even though the change in y given by the derivative at x = 0 must be zero. This is because the definition of the derivative from first principles is the limiting value of rise (the change in y) over run (the change in x) as the run → 0.
To be more clear, say you are still at x = 0 and you move a little bit towards the right by an amount 0.01, so the change in x is 0.01 and the change in y is 0.01² = 0.0001. Hence the derivative, rise over run, is 0.0001 / 0.01 = 0.01.
Now as we decrease the run, i.e. the change in x, the value of this ratio moves closer and closer to 0. Let's take another example with a smaller value: the change in x is 0.0001, so the change in y is 0.00000001, and rise over run is 0.0001. Thus, we see that the value of rise/run gets closer and closer to 0 as we decrease the change in x. So, we have to understand that the derivative is the limiting value of the ratio rise/run as we shrink the run, the change in x.
Now, let's assume that you climbed up the curve and reached the point (1, 1). The curve is a little steeper at this point than it was at x = 0 and has a slope of 2·1 = 2, which means if you increase your step by a little amount from here, say you take a step Δx, then you rise by an amount of about 2·Δx, and as you move up, the steepness keeps increasing, so even a small step in the x direction causes you to rise by a significant amount in the y direction.
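To see the limiting behaviour of rise/run in action, here is a minimal Python sketch (assuming the parabola f(x) = x² from above) that shrinks the run at x = 0 and x = 1:

```python
def f(x):
    return x ** 2  # the parabola we are walking on

def rise_over_run(x, run):
    # rise over run for a step of size `run` starting at x
    return (f(x + run) - f(x)) / run

for x in (0.0, 1.0):
    for run in (0.01, 0.0001, 0.000001):
        print(f"x = {x}, run = {run}, rise/run = {rise_over_run(x, run):.6f}")

# At x = 0 the ratio shrinks towards 0, while at x = 1 it settles towards the slope 2.
```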
Derivative of Function of Multiple Variables
Let's jump to functions of multiple variables, and since we were walking on a parabola before, I thought it would be interesting to walk on a paraboloid now. Below we can see the shape of a paraboloid with the equation f(x, y) = x² + y².
Now, how do we calculate the derivative of multivariable functions?
Here, partial derivatives come to our rescue. For a function of multiple variables, we differentiate the function with respect to each of the variables while treating all the other variables as constants. So for a function of n variables, we get n partial derivatives. Hence, for the paraboloid, which is a function of 2 variables, we get 2 partial derivatives: one with respect to x and the other with respect to y.
Understanding Partial Derivatives
Khanacademy has brilliant content on this topic here.
Understand that the derivative of a function of a single variable is the building block of the derivative of a function of multiple variables. From first principles, the partial derivative with respect to x can be defined as:

$$ \frac{\partial f}{\partial x} = \lim_{h \to 0} \frac{f(x + h,\ y) - f(x,\ y)}{h} $$
While understanding partial derivatives graphically, when we say pretend y is a constant while differentiating the function with respect to x, imagine slicing the plot of the function with a plane that is parallel to the x-axis and perpendicular to the y-axis, so that every point on the plane has the same constant value of y. Now, increase your visualizing capacity and imagine what we would see on the cross-section when this plane slices the paraboloid 🤔. We would see a parabola. For every point on that parabola, y has the same constant value. If we sliced the paraboloid with a plane parallel to the x-axis, intersecting the y-axis at y = 6, then every point on the parabola at the sliced section would be of the form (x, 6). So the slope of this parabola depends only on the changing values of x, and even if that curve had a mind of its own and wanted its slope to depend on y, it never could, because the sliced parabola is stuck with only one value of y.
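That slicing idea is easy to check numerically. Here is a small sketch (assuming the paraboloid f(x, y) = x² + y²) where the partial derivative with respect to x is computed by holding y fixed and nudging only x:

```python
def f(x, y):
    return x ** 2 + y ** 2  # the paraboloid

def partial_x(x, y, h=1e-6):
    # y is held constant: we only move along the x direction
    return (f(x + h, y) - f(x, y)) / h

def partial_y(x, y, h=1e-6):
    # x is held constant: we only move along the y direction
    return (f(x, y + h) - f(x, y)) / h

# On the slice y = 6 the cross-section is the parabola x^2 + 36,
# so its slope depends only on x (here roughly 2x = 6) and never on y.
print(partial_x(3.0, 6.0))  # ~ 6.0
print(partial_y(3.0, 6.0))  # ~ 12.0
```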
Now, the total change in the function with respect to all its variables is:

$$ df = \frac{\partial f}{\partial x}\, dx + \frac{\partial f}{\partial y}\, dy $$

or, put in terms of small but finite changes in the variables, the change in the function is approximately:

$$ \Delta f \approx \frac{\partial f}{\partial x}\, \Delta x + \frac{\partial f}{\partial y}\, \Delta y $$
For the paraboloid above, the partial derivatives are ∂f/∂x = 2x and ∂f/∂y = 2y. At the point (1, 2), the value of the function is 1² + 2² = 5, and for small changes Δx and Δy, the change in the function is approximately Δf ≈ 2Δx + 4Δy.
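A quick numeric check of that approximation (again assuming f(x, y) = x² + y²):

```python
def f(x, y):
    return x ** 2 + y ** 2

x, y = 1.0, 2.0
dx, dy = 0.01, 0.01

exact_change = f(x + dx, y + dy) - f(x, y)
approx_change = 2 * x * dx + 2 * y * dy  # using the partial derivatives 2x and 2y

print(exact_change)   # ~ 0.0602
print(approx_change)  # 0.06
```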
Gradient of a Function
I won’t be going into much depth about Directional Derivatives and Vector Fields, but for those who want to understand them in a more intuitive manner, check out this → Directional Derivative on Khanacademy.
As Mr. Grant Sanderson says in **this** video on Khanacademy:
> Gradient is a way of packing all the partial derivative information of a function.
We use nabla, the upside-down delta symbol ∇, to represent the gradient operator, and the gradient's output, when applied to a function f, is a vector of the partial derivatives of that function with respect to all its variables. Let's work with a function of 2 variables for now.
For a function of n variables, the gradient outputs an n-dimensional vector. We know we can imagine a vector as an arrow pointing in a specific direction, with its length giving us its magnitude, but where does this vector given by the gradient point?
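As a sketch of this packing, a numerical gradient can be written as a function that returns one partial derivative per input variable (the helper below and its names are mine, just for illustration):

```python
import numpy as np

def numerical_gradient(f, point, h=1e-6):
    """Return the vector of partial derivatives of f at `point`."""
    point = np.asarray(point, dtype=float)
    grad = np.zeros_like(point)
    for i in range(point.size):
        step = np.zeros_like(point)
        step[i] = h  # nudge only the i-th variable
        grad[i] = (f(*(point + step)) - f(*point)) / h
    return grad

# Example with the paraboloid f(x, y) = x^2 + y^2:
print(numerical_gradient(lambda x, y: x ** 2 + y ** 2, [1.0, 2.0]))  # ~ [2. 4.]
```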
This is where things get interesting: the gradient vector points in the direction of the maximum change in the function from the point (x, y) where the gradient was calculated, i.e. the direction of steepest ascent. Let me help you imagine this.
Consider a function of x and y. Its gradient is the vector of its two partial derivatives, and evaluated at the point (1, 2), the gradient works out to (3, −4).
Below we can see a contour plot of the above function and the gradient vector at the point (1, 2). A contour plot shows the cross-sections of a function of 2 variables when it is sliced by planes parallel to the x-y plane; every point on a specific contour line gives us the same value of the function. For better intuition and further understanding, this video is recommended.
So, we can say that each component of the gradient vector tells us how far, and in which direction, to move along that component's axis. For the above example, the value of the gradient vector at (1, 2) was (3, −4), so from the point (1, 2) we move 3 units in the positive x direction and 4 units in the negative y direction to ascend by the maximum amount.
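To convince yourself numerically that the gradient direction really wins among all directions, here is a small sketch. It uses f(x, y) = x² + y² as a stand-in (the function behind the (3, −4) example is not written out here) and compares the increase of f for a fixed step length taken in many directions:

```python
import numpy as np

def f(x, y):
    return x ** 2 + y ** 2

point = np.array([1.0, 2.0])
grad = np.array([2 * point[0], 2 * point[1]])  # analytic gradient (2x, 2y)

step_length = 0.01
best_increase, best_direction = -np.inf, None
for angle in np.linspace(0, 2 * np.pi, 360, endpoint=False):
    direction = np.array([np.cos(angle), np.sin(angle)])  # unit vector
    increase = f(*(point + step_length * direction)) - f(*point)
    if increase > best_increase:
        best_increase, best_direction = increase, direction

print("best direction found:", best_direction)
print("gradient direction:  ", grad / np.linalg.norm(grad))
# The two directions agree (up to the angular resolution of the search).
```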
If you are still confused by all this, go back to a function of a single variable: imagine the parabola f(x) = x², which has derivative 2x, calculate the value of the derivative for positive and negative values of x, and notice that the direction the gradient points along is always the one in which the function increases.
Vector Fields
We know that the gradient is a vector of the partial derivatives, and when we draw the downscaled gradient vector at every point of the function's domain, we get a vector field where each vector points in the direction of steepest ascent. Below, we can see the gradient vector field for one such function.
The above image is clipped from here
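If you want to draw such a field yourself, here is a minimal matplotlib sketch (using f(x, y) = x² + y² as an example, which may not be the exact function shown in the image above):

```python
import numpy as np
import matplotlib.pyplot as plt

# Grid of points on the x-y plane
x, y = np.meshgrid(np.linspace(-2, 2, 20), np.linspace(-2, 2, 20))

# Gradient of f(x, y) = x^2 + y^2 is (2x, 2y)
u, v = 2 * x, 2 * y

plt.quiver(x, y, u, v)  # one (downscaled) gradient arrow per grid point
plt.title("Gradient vector field of $x^2 + y^2$")
plt.xlabel("x")
plt.ylabel("y")
plt.show()
```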
Directional Derivative
Suppose you are climbing a hill. At a certain arbitrary point on the hill, there are many directions you could take to climb it, and for every different direction you take, you ascend by a different amount. We can calculate this amount of change along any of these directions using the directional derivative, but among all of them there’s one specific direction that gives us the maximum change. As already stated, this direction is the one pointed out by the gradient vector. The directional derivative is calculated as the dot product of the gradient and a direction vector. To understand why the gradient actually points in the direction of maximum change, or steepest ascent, this content from Khanacademy is recommended.
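In code, the directional derivative is just that dot product between the gradient and a unit vector along the chosen direction; a small sketch (again with f(x, y) = x² + y² as a stand-in):

```python
import numpy as np

grad = np.array([2.0, 4.0])  # gradient of x^2 + y^2 at the point (1, 2)
direction = np.array([1.0, 1.0])
unit = direction / np.linalg.norm(direction)

# Slope of f along `direction` from the point (1, 2)
print(np.dot(grad, unit))    # ~ 4.243

# The largest possible value is |grad| itself, reached along the gradient direction
print(np.dot(grad, grad / np.linalg.norm(grad)))  # ~ 4.472
```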
Applying it all to ML
In ML, we optimize a function which we call the cost function. The cost function is a function of the learning parameters, the θ's. A simple cost function used to measure the cost of the predictions made by a model for a given value of θ is the mean squared error cost function:

$$ J(\theta) = \frac{1}{2m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right)^2 $$

where h_θ(x⁽ⁱ⁾) is the model's prediction for the i-th example, y⁽ⁱ⁾ is the actual value, and m is the number of examples.
We want the cost we pay for our predictions, i.e. the measure of how far our predicted values are from the actual values, to be as small as possible. To drive this cost function as low as we can, we use the same theory of the gradient, which gives us the direction of maximum ascent, but instead of ascending we go in the opposite direction and descend towards the minimum value, i.e. the minima of the function. So we iterate a number of times; on each iteration we find the gradient, descend, and update θ with a new set of values, until the cost converges to an optimal value, the ideal goal being to reach the minima where the cost is zero. Gradient descent is applied as:

$$ \theta_j := \theta_j - \alpha \frac{\partial J(\theta)}{\partial \theta_j} $$

where α scales the size of each step.
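Putting this together, here is a minimal NumPy sketch of gradient descent on the mean squared error cost for a linear model; the toy data, learning rate, and variable names are mine, just to illustrate the update rule above:

```python
import numpy as np

# Toy data: y is roughly 3x + 1; the first column of X is the bias term
X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
y = np.array([1.0, 4.1, 6.9, 10.2])

theta = np.zeros(2)  # learning parameters
alpha = 0.05         # learning rate
m = len(y)

for iteration in range(2000):
    predictions = X @ theta
    error = predictions - y
    cost = (error ** 2).sum() / (2 * m)   # mean squared error cost J(theta)
    gradient = X.T @ error / m            # vector of partial derivatives dJ/d(theta_j)
    theta = theta - alpha * gradient      # step opposite to the gradient

print(theta)  # close to [1, 3], the intercept and slope of the toy data
```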
We know the gradient is a vector pointing in the direction of maximum increase of the function, so each component of that vector must also point in the direction of maximum change along its own axis. The cost function is a function of the parameters θ, so think about what gradient descent does for a single parameter θ_j: let's say the cross-section of the cost function along the θ_j axis is a parabola, J = θ_j² (as it would be for a paraboloid-shaped cost function).
The derivative is then 2θ_j, so for positive θ_j the derivative tells us that increasing θ_j increases J, and for negative θ_j it tells us that decreasing θ_j increases J, but what we want is a decrease in J. So notice the “−” sign in the implementation of gradient descent above: it means move in the direction opposite to that of the gradient, and the value of α scales the magnitude of the movement along that direction. Since we are scaling our step and not moving by exactly the amount given by the gradient, we do not move in a smooth straight line towards the minima but take bigger, zig-zag steps, which can be seen on the contour plot below (considering that the cost function is a paraboloid).
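You can watch this sign behaviour, and the effect of α, on the one-dimensional parabola J(θ) = θ² (a toy choice of mine): the update θ ← θ − α·2θ pushes θ towards 0 from either side, and the size of α decides how big, and how zig-zaggy, the steps are:

```python
def gradient(theta):
    return 2 * theta  # derivative of J(theta) = theta^2

for alpha in (0.1, 0.9):
    theta = 3.0       # start on the positive side of the parabola
    trace = [theta]
    for _ in range(5):
        theta = theta - alpha * gradient(theta)  # the gradient descent update
        trace.append(round(theta, 4))
    print(f"alpha = {alpha}: {trace}")

# A small alpha walks smoothly towards 0; a large alpha overshoots and zig-zags around it.
```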
References
- Khanacademy Multivariable Derivatives
- Essence of Calculus - 3b1b
- The Internet
Hey, I assume you finished reading. I would love to know your feedback, and if you found any error or mistake in this blog post, please do not hesitate to reach out to me.