Case Study

Production Bug Triage: Transformer Block Norm Placement vs Attention/FFN Interface Contracts

You are reviewing a teammate’s Transformer block implementation for an internal LLM service. The model uses hidden size d=1024, sequence length m=256, number of attention heads h=16 (so per-head dimension is 64), and FFN hidden size d_h=4096 with ReLU. The teammate reports that training becomes unstable (loss spikes and occasional NaNs) after a refactor that was intended to be “behavior-preserving.” They provide the following pseudocode for one block:

Input: H (shape m×d)

  1. A = MultiHeadSelfAttention(LNorm(H))
  2. H1 = LNorm(H + A)
  3. F = FFN(LNorm(H1))
  4. H2 = H1 + LNorm(F)

Output: H2
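
For concreteness, here is a minimal PyTorch transcription of the reported block exactly as written; the class and module names (ReportedBlock, ln1–ln4, attn, ffn) are illustrative placeholders, not the teammate's actual code:

  import torch
  import torch.nn as nn

  class ReportedBlock(nn.Module):
      # Literal transcription of the pseudocode above; names are placeholders.
      def __init__(self, d=1024, h=16, d_h=4096):
          super().__init__()
          self.ln1, self.ln2 = nn.LayerNorm(d), nn.LayerNorm(d)
          self.ln3, self.ln4 = nn.LayerNorm(d), nn.LayerNorm(d)
          self.attn = nn.MultiheadAttention(d, h, batch_first=True)
          self.ffn = nn.Sequential(nn.Linear(d, d_h), nn.ReLU(), nn.Linear(d_h, d))

      def forward(self, H):                     # H: (batch, m, d)
          X = self.ln1(H)
          A, _ = self.attn(X, X, X)             # step 1: A = MHSA(LNorm(H))
          H1 = self.ln2(H + A)                  # step 2: norm applied to the residual sum
          F = self.ffn(self.ln3(H1))            # step 3: F = FFN(LNorm(H1))
          H2 = H1 + self.ln4(F)                 # step 4: norm applied to the branch output only
          return H2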

Assume MultiHeadSelfAttention follows the standard pattern: for each head j, Q[j] = X Wq[j], K[j] = X Wk[j], V[j] = X Wv[j]; scaled dot-product attention is computed per head, the head outputs are concatenated, and the result is projected back to dimension d.
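
A minimal sketch of that standard pattern in plain tensor operations; the weight names Wq, Wk, Wv, Wo and the helper function multi_head_self_attention are illustrative assumptions, not a prescribed interface:

  import math
  import torch

  def multi_head_self_attention(X, Wq, Wk, Wv, Wo, h=16):
      # X: (m, d); Wq, Wk, Wv, Wo: (d, d); h heads of size d_k = d // h (64 here).
      m, d = X.shape
      d_k = d // h
      # Project once, then split the feature axis into h heads.
      Q = (X @ Wq).view(m, h, d_k).transpose(0, 1)        # (h, m, d_k)
      K = (X @ Wk).view(m, h, d_k).transpose(0, 1)        # (h, m, d_k)
      V = (X @ Wv).view(m, h, d_k).transpose(0, 1)        # (h, m, d_k)
      # Scaled dot-product attention, computed independently per head.
      scores = Q @ K.transpose(-2, -1) / math.sqrt(d_k)   # (h, m, m)
      A = torch.softmax(scores, dim=-1) @ V               # (h, m, d_k)
      # Concatenate the heads and project back to d.
      return A.transpose(0, 1).reshape(m, d) @ Wo         # (m, d)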

Case task: Identify whether this block consistently implements a pre-norm scheme, a post-norm scheme, or an inconsistent mixture of the two. Then explain (a) the most likely stability-related issue caused by the mixture, in terms of residual-path “cleanliness” and normalization placement, and give (b) one concrete corrected block formula (in equations or pseudocode) that makes the normalization placement consistent while keeping all tensor dimensions valid for both the attention sub-layer and the FFN sub-layer.
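
For checking an answer to part (b), a minimal sketch of one consistent arrangement (pre-norm shown here); attn and ffn are assumed to be shape-preserving callables from (m, d) to (m, d), and this is only one acceptable form, not the unique answer:

  def prenorm_block(H, attn, ffn, ln1, ln2):
      # LayerNorm is applied only on the branch inputs; the residual path
      # carries H forward unnormalized, and every tensor stays (m, d).
      H1 = H + attn(ln1(H))    # attention sub-layer: (m, d) -> (m, d)
      H2 = H1 + ffn(ln2(H1))   # FFN sub-layer: d -> d_h -> d, back to (m, d)
      return H2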
