Multiple Choice

A machine learning team is training a large neural network on a massive dataset. To accelerate the process, they employ a strategy where the training data is split across 16 GPUs. Each GPU holds a complete copy of the model and processes its own subset of the data. After each forward and backward pass, the results from all GPUs are combined before updating the model's parameters. The team observes that while using 8 GPUs provided a nearly 8x speed-up compared to a single GPU, scaling to 16 GPUs only resulted in a 10x total speed-up. Based on the principles of the training strategy described, what is the most likely bottleneck causing this diminishing return in performance when scaling from 8 to 16 GPUs?
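
As a rough sketch of the strategy described (each GPU keeps a full model replica, processes its own data shard, and the per-GPU results are combined before the parameter update), the snippet below uses PyTorch's torch.distributed package. The model, loss function, optimizer, and process-group setup are assumed for illustration and are not part of the question.

```python
# Minimal data-parallel training step, assuming one process per GPU and that
# dist.init_process_group(...) has already been called for all 16 processes.
import torch
import torch.distributed as dist
import torch.nn as nn

def train_step(model: nn.Module, batch: torch.Tensor, targets: torch.Tensor,
               loss_fn, optimizer) -> torch.Tensor:
    # Every replica runs the same forward/backward pass on its own data shard.
    optimizer.zero_grad()
    loss = loss_fn(model(batch), targets)
    loss.backward()

    # Combine the per-GPU results: sum gradients across all replicas with an
    # all-reduce, then average so each replica holds identical gradients.
    world_size = dist.get_world_size()
    for param in model.parameters():
        if param.grad is not None:
            dist.all_reduce(param.grad, op=dist.ReduceOp.SUM)
            param.grad /= world_size

    # All replicas apply the same averaged gradient, keeping the model
    # copies in sync after every step.
    optimizer.step()
    return loss
```

The key design point is that this gradient synchronization runs after every backward pass, so its communication cost is paid at every training step regardless of how much compute each individual GPU saves.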

Tags

Ch.2 Generative Models - Foundations of Large Language Models

Foundations of Large Language Models

Foundations of Large Language Models Course

Computing Sciences

Analysis in Bloom's Taxonomy

Cognitive Psychology

Psychology

Social Science

Empirical Science

Science
