Case Study

Post-Norm vs Pre-Norm Migration: Verifying Tensor Shapes and Correct Sub-layer Wiring

You are reviewing a teammate’s pull request that “converts a Transformer block to pre-norm for better stability” in an internal LLM used for document triage. The model uses representation size d=512, FFN hidden size d_h=2048, sequence length m=128, and multi-head self-attention with n_head=8 (so each head uses d_head=64). The PR includes the following pseudocode for one block (the self-attention sub-layer followed by the FFN sub-layer):

  1. a = LN(x)
  2. attn_out = MultiHeadSelfAttention(a) # returns shape (m, 512)
  3. y = LN(attn_out + x)
  4. f = LN(y)
  5. ffn_out = ReLU(f * W_h + b_h) * W_f + b_f
  6. out = LN(ffn_out + y)
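
To make the shape bookkeeping concrete, here is a minimal runnable sketch of the block exactly as the PR writes it, assuming PyTorch; the module and variable names (PRBlock, ln1 through ln4) are illustrative, not from the PR, and the assertions only verify that every intermediate tensor keeps the (m, d) interface:

  import torch
  import torch.nn as nn

  d, d_h, m, n_head = 512, 2048, 128, 8

  class PRBlock(nn.Module):  # the block as written in the PR, for shape checks only
      def __init__(self):
          super().__init__()
          self.ln1 = nn.LayerNorm(d)                                     # step 1
          self.attn = nn.MultiheadAttention(d, n_head, batch_first=True)
          self.ln2 = nn.LayerNorm(d)                                     # step 3
          self.ln3 = nn.LayerNorm(d)                                     # step 4
          self.W_h = nn.Linear(d, d_h)  # computes f @ W_h + b_h, W_h: (512, 2048)
          self.W_f = nn.Linear(d_h, d)  # computes · @ W_f + b_f, W_f: (2048, 512)
          self.ln4 = nn.LayerNorm(d)                                     # step 6

      def forward(self, x):                             # x: (batch, m, d)
          a = self.ln1(x)                               # 1. a = LN(x)
          attn_out, _ = self.attn(a, a, a)              # 2. attn_out: (batch, m, 512)
          y = self.ln2(attn_out + x)                    # 3. LN applied after the residual
          f = self.ln3(y)                               # 4. f = LN(y)
          ffn_out = self.W_f(torch.relu(self.W_h(f)))  # 5. FFN: d -> d_h -> d
          return self.ln4(ffn_out + y)                  # 6. LN applied after the residual

  x = torch.randn(1, m, d)                              # one sequence of m=128 tokens
  assert PRBlock()(x).shape == (1, m, d)                # the (m, d) interface holds

Note that these assertions pass regardless of where the LN calls sit: shape checks alone cannot distinguish pre-norm from post-norm wiring, which is why the review must also trace the residual paths.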

The author claims this is “pre-norm” because an LN is applied before each sub-layer. During training, however, you still see instability and slower convergence than expected.

As the reviewer, identify whether this block is actually pre-norm, post-norm, or a hybrid, and explain (a) the minimal change(s) needed to make it a true pre-norm block for both sub-layers, and (b) state the required dimensions of W_h and W_f so that the FFN preserves the (m, d) interface expected by the residual connections. Your answer must explicitly reference how residual connections, layer normalization placement, and the attention/FFN output shapes interact in this block.
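
As a reference point for the review, the standard pre-norm wiring places LN only on each sub-layer’s input and never normalizes the residual stream itself. A minimal sketch under the same assumptions as above (PyTorch; PreNormBlock and ffn are illustrative names, not from the PR):

  import torch
  import torch.nn as nn

  d, d_h, m, n_head = 512, 2048, 128, 8

  class PreNormBlock(nn.Module):  # standard pre-norm wiring, for comparison
      def __init__(self):
          super().__init__()
          self.ln1 = nn.LayerNorm(d)
          self.attn = nn.MultiheadAttention(d, n_head, batch_first=True)
          self.ln2 = nn.LayerNorm(d)
          self.ffn = nn.Sequential(
              nn.Linear(d, d_h),  # maps d=512 -> d_h=2048
              nn.ReLU(),
              nn.Linear(d_h, d),  # maps d_h=2048 -> d=512, restoring (m, d)
          )

      def forward(self, x):                  # x: (batch, m, d)
          a = self.ln1(x)
          x = x + self.attn(a, a, a)[0]      # residual adds the raw attention output
          x = x + self.ffn(self.ln2(x))      # residual adds the raw FFN output
          return x                           # no LN after either residual

  assert PreNormBlock()(torch.randn(1, m, d)).shape == (1, m, d)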
