Learn Before
  • Formulation of a Feedforward Neural Network
  • Forward Propagation

Concept

Forward Propagation Formulation

Input: A^{[l-1]}
Output: A^{[l]}
Cache: Z^{[l]}, A^{[l-1]}

Z^{[l]} = W^{[l]} A^{[l-1]} + b^{[l]}
A^{[l]} = g^{[l]}(Z^{[l]})
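The two equations above can be sketched as a single NumPy forward step; the function name, activation choices, and layer sizes below are illustrative assumptions, not part of the page:

```python
import numpy as np

def linear_activation_forward(A_prev, W, b, activation):
    """One forward-propagation step: Z = W A_prev + b, then A = g(Z).

    A_prev -- activations from the previous layer, shape (n_prev, m)
    W      -- weight matrix for this layer, shape (n, n_prev)
    b      -- bias vector, shape (n, 1), broadcast over the batch
    Returns A and the cache (Z, A_prev) kept for backpropagation.
    """
    Z = W @ A_prev + b
    if activation == "relu":
        A = np.maximum(0, Z)
    elif activation == "sigmoid":
        A = 1 / (1 + np.exp(-Z))
    else:
        raise ValueError(f"unknown activation: {activation}")
    return A, (Z, A_prev)

# Example: a layer with 3 units fed by 2 inputs, batch of 4 examples
rng = np.random.default_rng(0)
A0 = rng.standard_normal((2, 4))   # A^{[0]}: the input
W1 = rng.standard_normal((3, 2))
b1 = np.zeros((3, 1))
A1, cache1 = linear_activation_forward(A0, W1, b1, "relu")
print(A1.shape)  # (3, 4): one activation per unit per example
```

Note that the cache stores exactly the quantities named in the formulation (Z^{[l]} and A^{[l-1]}), since both reappear in the gradient computations of the backward pass.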

Updated 2021-10-17

Contributors are:

Iman YeckehZaare

Who are from:

University of Michigan - Ann Arbor

Tags

Data Science

Related
  • Forward Propagation Formulation
  • Backward Propagation Formulation
  • Dimension of weight matrix
  • Connection between the Layers of Neural Network
  • True/False: During forward propagation, in the forward function for a layer l you need to know what the activation function in that layer is (Sigmoid, tanh, ReLU, etc.). During back propagation, the corresponding backward function also needs to know what the activation function for layer l is, since the gradient depends on it.

  • Which of these is a correct vectorized implementation of forward propagation for layer ℓ, where 1 ≤ ℓ ≤ L?

    • Z^{[ℓ]} = W^{[ℓ]} A^{[ℓ]} + b^{[ℓ]}
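The true/false question above is worth unpacking: the backward function for a layer reuses the cached Z and applies the derivative g'(Z) of the *same* activation used forward, so each activation needs its own backward function. A minimal sketch (function names are illustrative assumptions):

```python
import numpy as np

def relu_backward(dA, Z):
    """dZ = dA * g'(Z). For ReLU, g'(Z) = 1 where Z > 0 and 0 elsewhere,
    so the backward function must know the forward activation was ReLU."""
    dZ = dA.copy()
    dZ[Z <= 0] = 0
    return dZ

def sigmoid_backward(dA, Z):
    """For sigmoid, g'(Z) = s * (1 - s) with s = sigmoid(Z): a different
    derivative, hence a different backward function per activation."""
    s = 1 / (1 + np.exp(-Z))
    return dA * s * (1 - s)

Z = np.array([[1.0, -2.0], [0.5, 3.0]])
dA = np.ones_like(Z)
print(relu_backward(dA, Z))  # → [[1. 0.] [1. 1.]]: gradient passes only where Z > 0
```

The same upstream gradient dA produces different dZ under the two activations, which is exactly why the activation choice must be known during back propagation.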
Learn After
  • Importance of Activation functions
