Learn Before
Concept

PyTorch Autograd

There are four main steps when training a neural network: assemble the graph, run forward propagation, compute the loss, and run backward propagation. The most complex and computationally demanding step is backward propagation. After calling .backward() on the loss tensor, Autograd automatically computes the gradient for each model parameter and stores it in that parameter's .grad attribute, where it can be accessed. This makes training a model take only a few lines of code while remaining customizable across a variety of loss functions and optimizers.
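The four steps can be sketched with a tiny example. The model, values, and shapes below are illustrative assumptions, not from the source; only the .backward() call and .grad attribute come from the description above.

```python
import torch

# Assemble the graph: a tiny linear model y = w*x + b
# (requires_grad=True tells autograd to track these tensors)
w = torch.tensor(2.0, requires_grad=True)
b = torch.tensor(0.5, requires_grad=True)

x = torch.tensor(3.0)        # one input sample (illustrative)
target = torch.tensor(10.0)  # its target value (illustrative)

# Forward propagation
y = w * x + b

# Compute the loss (squared error on this one sample)
loss = (y - target) ** 2

# Backward propagation: autograd fills in .grad for each parameter
loss.backward()

print(w.grad)  # d(loss)/dw = 2*(y - target)*x
print(b.grad)  # d(loss)/db = 2*(y - target)
```

An optimizer would then use these .grad values to update w and b, which is why the same few lines work with many different loss functions and optimizers.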


Updated 2021-10-09

Tags

Data Science
