Case Study

Optimizing Pipelined Training Throughput

A machine learning team is training a large model on a pipelined system spanning multiple processors. They run three experiments to find the optimal configuration, keeping the total batch size fixed while varying how many smaller chunks (micro-batches) it is divided into. Analyze the results below and explain the performance changes observed across the experiments.
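Before analyzing the results, it helps to have a simple model of why the micro-batch count matters. The sketch below is an assumed idealized model (not the team's actual system): in a synchronous pipeline with `p` stages and `m` equal-cost micro-batches, the idle "bubble" fraction is `(p - 1) / (m + p - 1)`, so splitting the same batch into more micro-batches raises processor utilization.

```python
def bubble_fraction(stages: int, micro_batches: int) -> float:
    """Idle fraction of an idealized synchronous pipeline.

    With `stages` pipeline stages and `micro_batches` equal-cost chunks,
    the pipeline spends (stages - 1) slots filling and draining out of
    (micro_batches + stages - 1) total slots.
    """
    return (stages - 1) / (micro_batches + stages - 1)

# Hypothetical 4-stage pipeline: more micro-batches, smaller bubble.
for m in (1, 4, 16):
    frac = bubble_fraction(stages=4, micro_batches=m)
    print(f"micro-batches={m:2d}  bubble={frac:.2f}  utilization={1 - frac:.2f}")
```

Under this model, going from 1 to 16 micro-batches on 4 stages cuts the bubble from 75% to about 16%, though in practice very small micro-batches eventually lose efficiency to per-chunk overhead and communication costs.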


Updated 2025-10-02

Tags

Ch.2 Generative Models - Foundations of Large Language Models

Foundations of Large Language Models

Foundations of Large Language Models Course

Computing Sciences

Analysis in Bloom's Taxonomy
