Learn Before
  • (Batch) Gradient Descent (Deep Learning Optimization Algorithm)

Multiple Choice

For logistic regression, the gradient is given by $\frac{\partial}{\partial\theta_j}J(\theta) = \frac{1}{m}\sum_{i=1}^{m}\left(h_\theta(x^{(i)}) - y^{(i)}\right)x_j^{(i)}$. Which of these is a correct gradient descent update for logistic regression with a learning rate of $\alpha$?

Check all that apply.

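Plugging the gradient above into the generic update rule $\theta_j := \theta_j - \alpha\,\frac{\partial}{\partial\theta_j}J(\theta)$ gives the batch update $\theta_j := \theta_j - \frac{\alpha}{m}\sum_{i=1}^{m}\left(h_\theta(x^{(i)}) - y^{(i)}\right)x_j^{(i)}$, applied simultaneously to all $\theta_j$. A minimal NumPy sketch of this update (the function and variable names here are illustrative, not from the source):

```python
import numpy as np

def sigmoid(z):
    # Logistic hypothesis: h_theta(x) = 1 / (1 + e^(-theta^T x))
    return 1.0 / (1.0 + np.exp(-z))

def batch_gradient_descent(X, y, alpha=0.1, iters=1000):
    """Batch gradient descent for logistic regression.

    X : (m, n) design matrix (first column of ones for the intercept)
    y : (m,) labels in {0, 1}
    """
    m, n = X.shape
    theta = np.zeros(n)
    for _ in range(iters):
        h = sigmoid(X @ theta)          # h_theta(x^(i)) for every example i
        grad = (X.T @ (h - y)) / m      # (1/m) * sum_i (h - y^(i)) * x_j^(i)
        theta = theta - alpha * grad    # simultaneous update of all theta_j
    return theta

# Toy usage: 1-D linearly separable data with an intercept column
X = np.array([[1.0, -2.0], [1.0, -1.0], [1.0, 1.0], [1.0, 2.0]])
y = np.array([0.0, 0.0, 1.0, 1.0])
theta = batch_gradient_descent(X, y, alpha=0.5, iters=2000)
preds = (sigmoid(X @ theta) >= 0.5).astype(float)
```

Note that the whole-vector form `theta -= alpha * grad` is what "simultaneous update" means in practice: every component is updated from the same old `theta`, never from a partially updated one.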

Updated 2021-11-11

Contributors:

Grace Dwyer
🏆 3

From:

University of Michigan - Ann Arbor
🏆 3

References


  • Machine Learning Yearning (Deeplearning.ai)

Tags

Data Science

Related
  • Logistic regression gradient descent

  • Derivation of the Gradient Descent Formula

  • Mini-Batch Gradient Descent

  • Epoch in Gradient Descent

  • Batch vs Stochastic vs Mini-Batch Gradient Descent

  • Gradient Descent with Momentum

  • For logistic regression, the gradient is given by $\frac{\partial}{\partial\theta_j}J(\theta) = \frac{1}{m}\sum_{i=1}^{m}\left(h_\theta(x^{(i)}) - y^{(i)}\right)x_j^{(i)}$. Which of these is a correct gradient descent update for logistic regression with a learning rate of $\alpha$?

  • Suppose you have the following training set, and fit a logistic regression classifier $h_\theta(x) = g(\theta_0 + \theta_1 x_1 + \theta_2 x_2)$.

  • Backpropagation
