Learn Before
Formula
Gradient of Objective Function with Respect to Loss and Regularization Term
The first step of backpropagation in a regularized neural network is to calculate the gradients of the overall objective function $J$ with respect to its individual components: the single-example loss term $L$ and the regularization term $s$. Since the objective function $J = L + s$ is a simple sum of these two scalar terms, its partial derivative with respect to each term is exactly 1:

$$\frac{\partial J}{\partial L} = 1, \qquad \frac{\partial J}{\partial s} = 1.$$
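This can be checked numerically. The sketch below (hypothetical helper names; the penalty is kept abstract, since any scalar regularization term behaves the same) estimates both partial derivatives of a summed objective with central finite differences and confirms each is 1.

```python
# Minimal sketch: for a summed objective J(L, s) = L + s, the partial
# derivative with respect to each component term is exactly 1.
# `objective` and `partial` are illustrative names, not library APIs.

def objective(loss, penalty):
    """Overall objective: single-example loss plus regularization term."""
    return loss + penalty

def partial(f, args, i, h=1e-6):
    """Central-difference estimate of the i-th partial derivative of f."""
    up = list(args); up[i] += h
    dn = list(args); dn[i] -= h
    return (f(*up) - f(*dn)) / (2 * h)

L, s = 0.73, 0.05  # arbitrary scalar values for the two terms
dJ_dL = partial(objective, [L, s], 0)
dJ_ds = partial(objective, [L, s], 1)
print(dJ_dL, dJ_ds)  # both estimates are 1 up to floating-point error
```

Because the sum rule makes these gradients constant, backpropagation can simply scale the downstream gradients of $L$ and $s$ by 1 and add them.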
Updated 2026-05-06
Tags
D2L
Dive into Deep Learning @ D2L