Learn Before
  • Mini-Batch Gradient Descent

Multiple Choice

Why is the best mini-batch size usually not 1 and not m, but instead something in-between?
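The original page shows only the question, so the sketch below is not part of its content; it is a minimal NumPy illustration of the trade-off the question asks about, and every name in it (minibatch_gd, X, y, batch_size, lr, n_epochs) is invented for this example. With batch_size=1 (stochastic gradient descent) each update processes a single example and loses the speedup of vectorized matrix operations; with batch_size=m (batch gradient descent) each update requires a full pass over all m training examples, so progress per update is slow. An in-between size keeps vectorization while still making many updates per pass.

```python
import numpy as np

def minibatch_gd(X, y, batch_size=64, lr=0.01, n_epochs=10):
    """Mini-batch gradient descent for linear least squares (illustrative sketch)."""
    m, n = X.shape                        # m = number of training examples
    w = np.zeros(n)
    for _ in range(n_epochs):
        perm = np.random.permutation(m)   # reshuffle before each pass
        for start in range(0, m, batch_size):
            idx = perm[start:start + batch_size]
            Xb, yb = X[idx], y[idx]
            # One vectorized gradient computation over the whole mini-batch:
            grad = Xb.T @ (Xb @ w - yb) / len(idx)
            w -= lr * grad                # one parameter update per mini-batch
    return w

# batch_size=1      -> stochastic GD: no vectorization speedup per update.
# batch_size=m      -> batch GD: one slow update per full pass over the data.
# 1 < batch_size < m -> vectorized updates, and many of them per pass.
```

In practice the in-between size is often a power of two (e.g., 64, 128, 256), partly so mini-batches fit hardware memory layouts well.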

Updated 2020-11-09

Contributors:

Grace Dwyer (University of Michigan - Ann Arbor)

Tags

Data Science

Related
  • An Example of Mini-Batches
  • Mini-Batch Gradient Descent Algorithm
  • Batch vs Stochastic vs Mini-Batch Gradient Descent
  • Example Using Mini-Batch Gradient Descent (Learning Rate Decay)
  • Mini-Batches Size
  • Which of these statements about mini-batch gradient descent do you agree with?
  • Suppose your learning algorithm’s cost J, plotted as a function of the number of iterations, looks like the image below:
  • Stochastic Gradient Descent Algorithm
  • Loss Gradient over a Mini-batch
