Learn Before
Formula

Intermediate Variable Formula

In a neural network with a single hidden layer, the computation begins by calculating an intermediate variable for the hidden layer. Assuming the input example is represented by a vector $\mathbf{x} \in \mathbb{R}^d$ and no bias term is included, the intermediate variable $\mathbf{z} \in \mathbb{R}^h$ is obtained by multiplying the hidden layer's weight parameter matrix $\mathbf{W}^{(1)} \in \mathbb{R}^{h \times d}$ by the input vector:

$$\mathbf{z} = \mathbf{W}^{(1)} \mathbf{x}$$
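As a minimal sketch of this computation with NumPy, assuming arbitrary example dimensions ($d = 4$ inputs, $h = 3$ hidden units) and randomly initialized weights:

```python
import numpy as np

# Example dimensions (arbitrary choices for illustration).
d, h = 4, 3
rng = np.random.default_rng(0)

W1 = rng.standard_normal((h, d))  # weight matrix W^(1), shape (h, d)
x = rng.standard_normal(d)        # input example x, shape (d,)

# Intermediate variable: z = W^(1) x, shape (h,)
z = W1 @ x
print(z.shape)  # (3,)
```

Each of the $h$ entries of $\mathbf{z}$ is the dot product of one row of $\mathbf{W}^{(1)}$ with the input $\mathbf{x}$.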


Updated 2026-05-06


Tags

D2L

Dive into Deep Learning @ D2L