In the process of investigating who is responsible for the model's poor performance, the first step is to inspect the output layer and compute the derivative of the loss function with respect to it.

Before that, let's consider the partial derivative of the loss function with respect to the output of a single neuron in the output layer.
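A minimal sketch of this quantity, assuming the cross-entropy loss $\mathcal{L}(\theta) = -\log \hat{y}_\ell$ with a softmax output, where $\hat{y} = (\hat{y}_1, \dots, \hat{y}_k)$ denotes the network's output and $\ell$ the true class (notation assumed here):

$$\frac{\partial \mathcal{L}(\theta)}{\partial \hat{y}_\ell} = -\frac{1}{\hat{y}_\ell}$$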

  • Where $\ell$ refers to the one among the $k$ output neurons that corresponds to the true class label.

This can be rewritten using the indicator notation:
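Writing $\mathbb{1}_{(i = \ell)}$ for the indicator that is 1 when $i = \ell$ and 0 otherwise (still in the notation assumed above):

$$\frac{\partial \mathcal{L}(\theta)}{\partial \hat{y}_i} = -\frac{\mathbb{1}_{(i = \ell)}}{\hat{y}_\ell}$$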

If we simply apply this to all the neurons in the output layer, we get the gradient vector of the loss with respect to $\hat{y}$:
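One way to write that gradient, using the one-hot vector $e_\ell$ described in the bullet below (an assumed symbol):

$$\nabla_{\hat{y}} \mathcal{L}(\theta) = -\frac{1}{\hat{y}_\ell} e_\ell$$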

  • Where $e_\ell$ is a $k$-dimensional one-hot vector whose $\ell^{th}$ element is 1 and the rest are 0.

What we're actually interested in is the gradient of the loss function with respect to the pre-activation of the output layer, $a_L$, since the final output $\hat{y}$ is obtained simply by applying the softmax function to $a_L$.
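For concreteness, a sketch of that relationship, with $a_L$ as the assumed symbol for the output layer's pre-activation:

$$\hat{y} = \text{softmax}(a_L), \qquad \hat{y}_i = \frac{\exp(a_{L,i})}{\sum_{j=1}^{k} \exp(a_{L,j})}$$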

This ultimately works out to:
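In the notation assumed above, the element-wise result is:

$$\frac{\partial \mathcal{L}(\theta)}{\partial a_{L,i}} = \hat{y}_i - \mathbb{1}_{(i = \ell)}$$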

  • The derivation for this is explained in lecture 3.5 of week 3; a brief sketch is also included below.
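A brief sketch of that chain-rule argument, under the same assumptions (softmax output, cross-entropy loss): the softmax derivative is $\frac{\partial \hat{y}_j}{\partial a_{L,i}} = \hat{y}_j \left(\mathbb{1}_{(i = j)} - \hat{y}_i\right)$, and only the $j = \ell$ term of the chain rule survives because $\frac{\partial \mathcal{L}(\theta)}{\partial \hat{y}_j} = 0$ for $j \neq \ell$:

$$\frac{\partial \mathcal{L}(\theta)}{\partial a_{L,i}} = \sum_{j} \frac{\partial \mathcal{L}(\theta)}{\partial \hat{y}_j} \frac{\partial \hat{y}_j}{\partial a_{L,i}} = -\frac{1}{\hat{y}_\ell} \cdot \hat{y}_\ell \left(\mathbb{1}_{(i = \ell)} - \hat{y}_i\right) = \hat{y}_i - \mathbb{1}_{(i = \ell)}$$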

This gives us the partial derivative of the loss function with respect to the $i^{th}$ element of $a_L$. Using this, we can write the gradient vector of the loss function with respect to the pre-activation of the output layer:
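Under the assumptions above, with $e_\ell$ the one-hot vector for the true class:

$$\nabla_{a_L} \mathcal{L}(\theta) = \hat{y} - e_\ell$$

As a sanity check, here is a minimal NumPy sketch (not from the original notes) comparing this closed-form gradient against a finite-difference estimate; the variable names (`a_L`, `true_class`, etc.) are illustrative only:

```python
import numpy as np

def softmax(a):
    # Shift by the max for numerical stability before exponentiating.
    e = np.exp(a - np.max(a))
    return e / e.sum()

def cross_entropy_loss(a_L, true_class):
    # Loss = -log(softmax(a_L)[true_class])
    return -np.log(softmax(a_L)[true_class])

k = 5                               # number of output neurons (assumed)
rng = np.random.default_rng(0)
a_L = rng.normal(size=k)            # pre-activation of the output layer
true_class = 2                      # index of the true class label

# Closed-form gradient: y_hat - e_l, where e_l is the one-hot true-class vector
y_hat = softmax(a_L)
e_l = np.zeros(k)
e_l[true_class] = 1.0
analytic_grad = y_hat - e_l

# Central finite-difference estimate of the same gradient
eps = 1e-6
numeric_grad = np.zeros(k)
for i in range(k):
    a_plus, a_minus = a_L.copy(), a_L.copy()
    a_plus[i] += eps
    a_minus[i] -= eps
    numeric_grad[i] = (cross_entropy_loss(a_plus, true_class)
                       - cross_entropy_loss(a_minus, true_class)) / (2 * eps)

print(np.allclose(analytic_grad, numeric_grad, atol=1e-6))  # expected: True
```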