Analyzing a Sub-Layer Implementation
An engineer is implementing a sub-layer for a neural network based on the formula: output = LNorm(F(input) + input). They have designed the data to flow in the following sequence of operations:
1. The input tensor is processed by a function F.
2. The result, F(input), is immediately passed through a Layer Normalization (LNorm) operation.
3. The original input tensor is added to the normalized result from step 2 to produce the final output.
Analyze this implementation. Does it correctly follow the given formula? Justify your reasoning by comparing the sequence of operations in the implementation to the sequence dictated by the formula.
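To make the comparison concrete, the two orderings can be written out directly. The following is a minimal sketch, not part of the original question: the hidden size, the stand-in for F (a linear layer in place of, e.g., self-attention), and the variable names are all illustrative assumptions.

```python
import torch
import torch.nn as nn

d_model = 512
lnorm = nn.LayerNorm(d_model)              # LNorm
sublayer_fn = nn.Linear(d_model, d_model)  # stand-in for F (e.g., self-attention)

x = torch.randn(2, 10, d_model)            # an example input tensor

# Sequence described in the implementation above:
#   1) F(input)  2) LNorm(F(input))  3) add the original input
implemented = x + lnorm(sublayer_fn(x))    # = input + LNorm(F(input))

# Sequence dictated by the formula output = LNorm(F(input) + input):
#   1) F(input)  2) add the original input  3) LNorm of the sum
formula = lnorm(sublayer_fn(x) + x)        # = LNorm(F(input) + input)

# The two orderings generally produce different tensors.
print(torch.allclose(implemented, formula))  # typically False
```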
Tags
Ch.2 Generative Models - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Analysis in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
A sub-layer in a neural network processes an input tensor using a specific architectural pattern. The process involves three key operations: 1) applying the sub-layer's primary function (e.g., self-attention), 2) applying a normalization function, and 3) adding the original input tensor to the result of the primary function (a residual connection). Arrange these three operations in the correct sequence that corresponds to the formula:
output = LNorm(F(input) + input).
Analyzing a Sub-Layer Implementation
A developer is implementing a sub-layer (e.g., self-attention) within a Transformer block. They need to apply the sub-layer's function F, a residual connection (adding the original input), and a layer normalization LNorm operation. Which of the following expressions correctly represents the post-norm architectural pattern?
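As a point of reference for the post-norm pattern named in this card, here is a minimal module-level sketch; the class name, constructor parameters, and the idea of passing F in as a module are illustrative assumptions, not taken from the source.

```python
import torch.nn as nn

class PostNormSubLayer(nn.Module):
    """Illustrative post-norm wrapper: output = LNorm(F(input) + input)."""

    def __init__(self, d_model: int, sublayer: nn.Module):
        super().__init__()
        self.sublayer = sublayer            # F, e.g. a self-attention module
        self.lnorm = nn.LayerNorm(d_model)  # LNorm

    def forward(self, x):
        # The residual addition happens first; normalization is applied last.
        return self.lnorm(self.sublayer(x) + x)
```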