Concept

Formulation of a Feedforward Neural Network

- $L$ = number of layers
- $m$ = number of training datapoints
- $n^{[l]}$ = number of units (neurons) in layer $l$
- $A^{[l]} = g^{[l]}(Z^{[l]})$ = activations (outputs) in layer $l$
- $Z^{[l]}, A^{[l]} : (n^{[l]}, m)$
- $X = A^{[0]}$
- $\hat{Y} = A^{[L]}$
- $W^{[l]}$ = weights for $Z^{[l]}$, with $W^{[l]} : (n^{[l]}, n^{[l-1]})$
- $b^{[l]}$ = biases for $Z^{[l]}$, with $b^{[l]} : (n^{[l]}, 1)$
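The definitions above can be sketched as a vectorized forward pass in NumPy. The layer sizes, parameter names (`W1`, `b1`, ...), and activation choices below are illustrative assumptions, not part of the original note; the point is that the shapes match the conventions listed: columns index training examples, and $Z^{[l]} = W^{[l]} A^{[l-1]} + b^{[l]}$ broadcasts the $(n^{[l]}, 1)$ bias across all $m$ columns.

```python
import numpy as np

def relu(z):
    return np.maximum(0, z)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def forward(X, params, activations):
    """Forward pass over all layers; X = A[0] has shape (n[0], m)."""
    A = X
    for l, g in enumerate(activations, start=1):
        W = params[f"W{l}"]          # shape (n[l], n[l-1])
        b = params[f"b{l}"]          # shape (n[l], 1)
        Z = W @ A + b                # Z[l]: (n[l], m), bias broadcast over columns
        A = g(Z)                     # A[l] = g[l](Z[l])
    return A                         # Y_hat = A[L]

# Hypothetical 2-layer network: n[0]=3, n[1]=4, n[2]=1, with m=5 examples
rng = np.random.default_rng(0)
params = {
    "W1": rng.standard_normal((4, 3)), "b1": np.zeros((4, 1)),
    "W2": rng.standard_normal((1, 4)), "b2": np.zeros((1, 1)),
}
X = rng.standard_normal((3, 5))
Y_hat = forward(X, params, [relu, sigmoid])  # shape (1, 5)
```

Note the column-major convention: each training example is one column of $X$, so the same weight matrices apply to a single example ($m = 1$) or a whole batch without any code changes.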


Updated 2021-11-16

Tags

Data Science