Formula

Gradient of Objective Function with Respect to Loss and Regularization Term

The first step of backpropagation in a regularized neural network is to calculate the gradients of the overall objective function $J = L + s$ with respect to its individual components: the single-example loss term $L$ and the regularization term $s$. Since the objective function is a simple sum of these two scalar values, its partial derivative with respect to each term is exactly $1$:

$$\frac{\partial J}{\partial L} = 1 \quad \textrm{and} \quad \frac{\partial J}{\partial s} = 1$$
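
This identity is easy to sanity-check with automatic differentiation. Below is a minimal sketch, assuming PyTorch and hypothetical placeholder values for the loss and regularization terms; any autograd framework would behave the same way:

```python
import torch

# Hypothetical scalar values for the single-example loss L and the
# regularization term s (placeholders, not from any particular model).
L = torch.tensor(0.53, requires_grad=True)
s = torch.tensor(0.08, requires_grad=True)

# The objective function is a simple sum of the two scalar terms.
J = L + s

# Backpropagate from the scalar objective.
J.backward()

print(L.grad)  # tensor(1.) -- dJ/dL = 1
print(s.grad)  # tensor(1.) -- dJ/ds = 1
```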

