Using the Taylor series, we established that changes made to the weights and bias must be in the direction opposite to that of the gradient vector, i.e., the vector of partial derivatives of the loss function with respect to the weights and bias.
After iteration $t$, the update rule for the weights and bias at iteration $t+1$ should be:

$$w_{t+1} = w_t - \eta \frac{\partial \mathcal{L}}{\partial w} \qquad\qquad b_{t+1} = b_t - \eta \frac{\partial \mathcal{L}}{\partial b}$$

Where:
- $\eta$ is the learning rate.
- Both derivatives are calculated at $w_t$ and $b_t$.
In pseudocode, the algorithm initializes $w$ and $b$, then repeatedly applies the two updates above for a fixed number of epochs (or until the loss stops decreasing).
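A minimal sketch of that loop in Python (the helper names `grad_w` and `grad_b`, the learning rate `eta`, and `max_epochs` are illustrative assumptions, not taken from the original):

```python
def gradient_descent(X, Y, grad_w, grad_b, eta=0.1, max_epochs=1000):
    w, b = 0.0, 0.0                  # initialize the parameters
    for _ in range(max_epochs):
        dw = grad_w(X, Y, w, b)      # dL/dw evaluated at (w_t, b_t)
        db = grad_b(X, Y, w, b)      # dL/db evaluated at (w_t, b_t)
        w = w - eta * dw             # step opposite to the gradient
        b = b - eta * db
    return w, b
```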
What is the formula to compute the gradient for the sigmoid function, which has been the focus till now? Remember that $f(x) = \frac{1}{1 + e^{-(wx + b)}}$. Assuming there's only one input point $(x, y)$, the loss function is:

$$\mathcal{L} = \frac{1}{2}\,(f(x) - y)^2$$
The derivative with respect to $w$ can therefore be calculated as:

$$\frac{\partial \mathcal{L}}{\partial w} = (f(x) - y)\,\frac{\partial f(x)}{\partial w}$$
Recall from the good ol' days that $\frac{d}{dx}\,e^{-x} = -e^{-x}$. Using the chain rule to compute the derivative of the sigmoid function, we get:

$$\frac{\partial \mathcal{L}}{\partial w} = (f(x) - y)\,f(x)\,(1 - f(x))\,x$$
Similarly:

$$\frac{\partial \mathcal{L}}{\partial b} = (f(x) - y)\,f(x)\,(1 - f(x))$$
For two or more points, the per-point gradients are simply summed:

$$\frac{\partial \mathcal{L}}{\partial w} = \sum_{i} (f(x_i) - y_i)\,f(x_i)\,(1 - f(x_i))\,x_i \qquad\qquad \frac{\partial \mathcal{L}}{\partial b} = \sum_{i} (f(x_i) - y_i)\,f(x_i)\,(1 - f(x_i))$$
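For example, these sums can be written directly in Python (a sketch assuming scalar inputs and the squared-error loss above; the function names are illustrative):

```python
import math

def f(w, b, x):
    # Sigmoid output for a single scalar input x
    return 1.0 / (1.0 + math.exp(-(w * x + b)))

def grad_w(X, Y, w, b):
    # Sum of per-point derivatives of the loss with respect to w
    return sum((f(w, b, x) - y) * f(w, b, x) * (1 - f(w, b, x)) * x for x, y in zip(X, Y))

def grad_b(X, Y, w, b):
    # Sum of per-point derivatives of the loss with respect to b
    return sum((f(w, b, x) - y) * f(w, b, x) * (1 - f(w, b, x)) for x, y in zip(X, Y))
```

These match the `grad_w` and `grad_b` placeholders used in the loop sketch earlier.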
These two derivatives can be plugged into the update formula to get the new weights and bias. The following 3D plot shows the loss gradually decreasing as the weights and bias change:
Python code for the Gradient Descent Model from start to finish:
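A self-contained sketch of such a model is given below, assuming scalar inputs, the squared-error loss, a fixed learning rate, and toy data; the class and method names are illustrative rather than taken from the original listing.

```python
import math

class SigmoidNeuronGD:
    """Single sigmoid neuron (one weight, one bias) trained with gradient descent."""

    def __init__(self, eta=0.1, max_epochs=2000):
        self.eta = eta                # learning rate (assumed value)
        self.max_epochs = max_epochs  # number of gradient descent iterations
        self.w, self.b = 0.0, 0.0

    def f(self, x):
        # Sigmoid output: 1 / (1 + e^-(wx + b))
        return 1.0 / (1.0 + math.exp(-(self.w * x + self.b)))

    def loss(self, X, Y):
        # Squared-error loss summed over all points
        return sum(0.5 * (self.f(x) - y) ** 2 for x, y in zip(X, Y))

    def fit(self, X, Y):
        for _ in range(self.max_epochs):
            # Gradients summed over all points, evaluated at the current (w, b)
            dw = sum((self.f(x) - y) * self.f(x) * (1 - self.f(x)) * x for x, y in zip(X, Y))
            db = sum((self.f(x) - y) * self.f(x) * (1 - self.f(x)) for x, y in zip(X, Y))
            self.w -= self.eta * dw   # move opposite to the gradient
            self.b -= self.eta * db
        return self

# Toy usage with made-up data points
X = [0.5, 2.5, 1.0, 3.0]
Y = [0.2, 0.9, 0.4, 0.8]
model = SigmoidNeuronGD().fit(X, Y)
print(model.w, model.b, model.loss(X, Y))
```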