Essay

Designing a Distributed Training Plan Under Memory, Throughput, and Stability Constraints

You are the tech lead for training a new LLM whose parameter and activation memory cannot fit on a single GPU, yet leadership expects near-linear throughput scaling when moving from 8 to 32 GPUs. Your cluster has 32 identical GPUs connected by high-bandwidth intra-node links and slower inter-node links. You must choose a distributed training approach that combines, as needed, data parallelism, model parallelism, pipeline parallelism (with micro-batching), and mixed-precision training. One possible layout is sketched below.
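For concreteness, here is a minimal sketch of one way the 32 GPUs could be mapped onto parallel groups. The specific degrees (4-way tensor parallel, 2-way pipeline parallel, 4-way data parallel) and the 8-GPUs-per-node layout are illustrative assumptions, not given in the scenario; the point is that the communication-heaviest groups stay on fast intra-node links while the data-parallel all-reduce is what crosses the slower inter-node links.

```python
# Sketch only: map 32 ranks onto (data, pipeline, tensor) parallel groups.
# The degrees below (TP=4, PP=2, DP=4) and 8 GPUs per node are assumptions
# chosen for illustration; they are not prescribed by the scenario.

TP, PP, DP = 4, 2, 4            # tensor / pipeline / data parallel degrees
WORLD = TP * PP * DP            # 32 GPUs total
GPUS_PER_NODE = 8

def coords(rank: int) -> tuple[int, int, int]:
    """Return (dp, pp, tp) coordinates for a global rank.

    Consecutive ranks share a tensor-parallel group, so per-layer
    activation all-reduces stay on fast intra-node links; each full model
    replica (TP * PP = 8 GPUs) occupies one node, and only the
    data-parallel gradient all-reduce crosses the slower inter-node links.
    """
    tp = rank % TP
    pp = (rank // TP) % PP
    dp = rank // (TP * PP)
    return dp, pp, tp

for rank in range(WORLD):
    dp, pp, tp = coords(rank)
    node = rank // GPUS_PER_NODE
    print(f"rank {rank:2d} -> node {node}  dp={dp} pp={pp} tp={tp}")
```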

Write a recommendation memo that proposes a concrete parallelism/mixed-precision strategy and justifies it. Your memo must: (1) explain how your design resolves the single-GPU out-of-memory issue, (2) explain where and why gradient synchronization/communication happens and how it affects scaling, (3) explain how pipeline micro-batching changes device utilization compared with naive layer-splitting, and (4) explain how mixed precision improves speed/memory while still keeping training numerically stable (e.g., what stays in higher precision and why). Conclude by identifying the most likely bottleneck that will prevent perfect 32x scaling in your design and one mitigation you would try first.
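As a reference point for item (4), the usual mixed-precision recipe runs the forward and backward matmuls in FP16 while keeping the master weights, optimizer state, and loss-scaling logic in FP32. The PyTorch sketch below shows that pattern; the toy linear model, batch size, and learning rate are placeholder assumptions for illustration only, not part of the assignment.

```python
# Sketch: FP16 compute with FP32 master weights and dynamic loss scaling.
# The toy model, optimizer, batch size, and step count are illustrative.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
model = torch.nn.Linear(1024, 1024).to(device)        # weights stay in FP32
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
scaler = torch.cuda.amp.GradScaler(enabled=(device == "cuda"))

for step in range(10):
    x = torch.randn(32, 1024, device=device)
    target = torch.randn(32, 1024, device=device)

    optimizer.zero_grad(set_to_none=True)
    # Forward/backward matmuls run in FP16 inside autocast; reductions
    # such as the loss itself are kept in higher precision for stability.
    with torch.autocast(device_type=device, dtype=torch.float16,
                        enabled=(device == "cuda")):
        loss = torch.nn.functional.mse_loss(model(x), target)

    # Scale the loss so small FP16 gradients do not underflow to zero,
    # then unscale before the FP32 optimizer update.
    scaler.scale(loss).backward()
    scaler.step(optimizer)     # skips the update if inf/nan gradients appear
    scaler.update()            # adjusts the loss scale dynamically
```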

Updated 2026-02-06

Tags

Ch.2 Generative Models - Foundations of Large Language Models

Foundations of Large Language Models

Foundations of Large Language Models Course

Computing Sciences
