Learn Before
  • Formulation of a Feedforward Neural Network

  • Forward Propagation

Forward Propagation Formulation

Input: $A^{[l-1]}$
Output: $A^{[l]}$
Cache: $Z^{[l]}, A^{[l-1]}$

$Z^{[l]} = W^{[l]} A^{[l-1]} + b^{[l]}$
$A^{[l]} = g^{[l]}(Z^{[l]})$
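As a rough NumPy sketch of this step (the function name `linear_activation_forward` and the variable names here are illustrative, not from the card itself):

```python
import numpy as np

def linear_activation_forward(A_prev, W, b, g):
    """One forward-propagation step for layer l.

    A_prev -- activations A[l-1] from the previous layer, shape (n[l-1], m)
    W      -- weight matrix W[l], shape (n[l], n[l-1])
    b      -- bias vector b[l], shape (n[l], 1)
    g      -- elementwise activation function g[l]
    """
    Z = W @ A_prev + b   # linear part: Z[l] = W[l] A[l-1] + b[l]
    A = g(Z)             # activation:  A[l] = g[l](Z[l])
    cache = (Z, A_prev)  # cached values reused during backpropagation
    return A, cache
```

For a full network, this step is applied layer by layer starting from $A^{[0]} = X$.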

Tags

Data Science

Related
  • Forward Propagation Formulation

  • Backward Propagation Formulation

  • Dimension of weight matrix

  • Connection between the Layers of Neural Network

  • True/False: During forward propagation, the forward function for layer $l$ needs to know what the activation function for that layer is (Sigmoid, tanh, ReLU, etc.). During backpropagation, the corresponding backward function also needs to know what the activation function for layer $l$ is, since the gradient depends on it (see the sketch after this list).

  • Which of these is a correct vectorized implementation of forward propagation for layer $\ell$, where $1 \le \ell \le L$?

    • $Z^{[\ell]} = W^{[\ell]} A^{[\ell-1]} + b^{[\ell]}$
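A minimal sketch of the backward point above, assuming ReLU as the example activation (the helper names `relu` and `relu_backward` are hypothetical): the backward function computes $dZ^{[l]} = dA^{[l]} \ast g^{[l]\prime}(Z^{[l]})$, so it must use the derivative of the same activation that was applied in the forward pass.

```python
import numpy as np

def relu(Z):
    """Forward activation g(Z) = max(0, Z), applied elementwise."""
    return np.maximum(0, Z)

def relu_backward(dA, Z):
    """Backward step through ReLU: dZ = dA * g'(Z).

    g'(Z) is 1 where Z > 0 and 0 elsewhere, so both the cached Z
    and the choice of activation determine the gradient.
    """
    return dA * (Z > 0)
```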
Learn After
  • Importance of Activation functions