Learn Before
Analyzing Efficiency in a Distributed Model
A team is training a large, 12-layer neural network on 3 GPUs. They partition the model by assigning layers 1-4 to GPU 1, layers 5-8 to GPU 2, and layers 9-12 to GPU 3. Analyze the computational workflow for a single data batch during the forward pass. Specifically, describe the activity state (active or idle) of GPU 1 and GPU 3 while GPU 2 is processing its layers. What is a primary drawback of this distribution method regarding hardware utilization?
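The workflow described above can be illustrated with a minimal sketch (a hypothetical timing model, not any particular framework's API): a single batch flows through the 3 pipeline stages one at a time, so while one GPU computes, the other two sit idle.

```python
# Hypothetical timing model of layer-wise model parallelism for ONE batch:
# the batch occupies exactly one pipeline stage per time step, so only
# one GPU is active at a time and overall utilization is 1/3.

def forward_pass_timeline(num_gpus=3):
    """Return, for each time step, the active/idle state of every GPU."""
    timeline = []
    for step in range(num_gpus):  # the batch visits one stage per step
        states = ["active" if gpu == step else "idle"
                  for gpu in range(num_gpus)]
        timeline.append(states)
    return timeline

for step, states in enumerate(forward_pass_timeline(), start=1):
    print(f"step {step}: " +
          ", ".join(f"GPU{g + 1}={s}" for g, s in enumerate(states)))
# While GPU 2 processes layers 5-8 (step 2), GPU 1 and GPU 3 are both idle.
```

In this model each GPU is active for only one of the three steps, which is the primary drawback the question points to: with a single batch in flight, hardware utilization is 1/num_gpus. (Pipeline schedules that split the batch into micro-batches reduce, but do not eliminate, this idle time.)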
Tags
Ch.2 Generative Models - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Analysis in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
Process Flow in Layer-wise Model Parallelism
Example of Model Parallelism with a Transformer Decoder
Worker Idle Time in Layer-wise Model Parallelism
An engineer is tasked with training a very large neural network composed of 24 sequential layers. The model is too large to fit into the memory of a single processing device. To solve this, the engineer decides to distribute the model across 4 identical devices by partitioning it based on its layers. Which of the following strategies correctly applies this layer-based distribution method?
Consider a large neural network with 12 sequential layers that must be distributed across 3 processing devices because it is too large for a single device. An engineer proposes the following distribution: Device 1 runs layers 1, 4, 7, 10; Device 2 runs layers 2, 5, 8, 11; and Device 3 runs layers 3, 6, 9, 12. This proposal does not correctly implement a layer-based partitioning strategy, which assigns groups of consecutive layers to different devices; here the layers are interleaved across devices, so an activation transfer is required after every single layer.