Essay

Root-Cause Analysis of Training Instability After a “Minor” Transformer Block Change

You inherit a production LLM codebase where a teammate made a “minor cleanup” to the Transformer block. After the change, training becomes unstable (loss spikes and occasional NaNs) only when scaling from 12 to 48 layers; the 12-layer model still trains. The teammate claims they only (a) moved LayerNorm, and (b) refactored the attention and FFN code for readability.

Assume the model uses token representations H ∈ R^{m×d}. The self-attention sub-layer is multi-head self-attention: for each head j, Q^{[j]} = H W_j^q, K^{[j]} = H W_j^k, V^{[j]} = H W_j^v; each head’s output is computed via scaled dot-product attention, and the head outputs are concatenated and projected back to dimension d. The FFN is two linear layers with a nonlinearity: FFN(h) = σ(h W_h + b_h) W_f + b_f, where W_h ∈ R^{d×d_h} and W_f ∈ R^{d_h×d}. LayerNorm normalizes each token’s d features and has learnable gain/bias.
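
For concreteness, here is a minimal sketch of one such block, assuming a pre-norm arrangement and PyTorch; the names ReferenceBlock, d_model, n_heads, and d_hidden, and the choice of GELU for σ, are illustrative assumptions rather than the production code:

    import torch
    import torch.nn as nn

    class ReferenceBlock(nn.Module):
        # One block as described above: multi-head self-attention that returns
        # dimension d, an FFN mapping d -> d_h -> d, and LayerNorm with learnable
        # gain/bias over each token's d features. Pre-norm residual placement.
        def __init__(self, d_model: int, n_heads: int, d_hidden: int):
            super().__init__()
            self.ln_attn = nn.LayerNorm(d_model)
            self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
            self.ln_ffn = nn.LayerNorm(d_model)
            self.ffn = nn.Sequential(
                nn.Linear(d_model, d_hidden),   # W_h, b_h
                nn.GELU(),                      # sigma
                nn.Linear(d_hidden, d_model),   # W_f, b_f: back to dimension d
            )

        def forward(self, h: torch.Tensor) -> torch.Tensor:
            # h has shape (batch, m, d); both sub-layers return the same shape,
            # so the residual additions h + sublayer(...) are well defined.
            x = self.ln_attn(h)
            attn_out, _ = self.attn(x, x, x, need_weights=False)
            h = h + attn_out
            h = h + self.ffn(self.ln_ffn(h))
            return h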

Write an engineering memo that (1) proposes the two most plausible implementation mistakes that could simultaneously explain “works at 12 layers but diverges at 48 layers” and are consistent with the teammate’s description, and (2) for each mistake, explains the mechanism of failure by explicitly connecting (i) residual + LayerNorm placement (pre-norm vs post-norm), (ii) how multi-head attention and FFN preserve/return to dimension d, and (iii) why depth amplifies the issue. Conclude with a concrete, minimal patch (in words or pseudocode) that would fix each mistake and a quick sanity-check you would run to confirm the fix.
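
To anchor the pre-norm vs post-norm distinction the memo must discuss, here is a hedged sketch of the two residual placements plus one cheap sanity check on residual-stream norms; the helper names (post_norm_step, pre_norm_step, residual_norm_probe) are illustrative assumptions, not the codebase’s actual interfaces:

    import torch

    def post_norm_step(h, sublayer, ln):
        # Post-norm (original Transformer): normalize after the residual add.
        # Every block rescales the residual stream, so at 48 layers the identity
        # path is interrupted 96 times and gradients must pass through every LayerNorm.
        return ln(h + sublayer(h))

    def pre_norm_step(h, sublayer, ln):
        # Pre-norm: normalize only the sub-layer input; the residual path stays
        # an identity map through all 48 blocks, which is why deep stacks train
        # more stably under this placement.
        return h + sublayer(ln(h))

    @torch.no_grad()
    def residual_norm_probe(blocks, h):
        # Sanity check after a patch: the mean per-token norm of the residual
        # stream should grow smoothly with depth, not blow up in the last blocks.
        norms = []
        for block in blocks:
            h = block(h)
            norms.append(h.norm(dim=-1).mean().item())
        return norms

Running residual_norm_probe on the model’s list of blocks with one warm batch, at both 12 and 48 layers and before/after a candidate patch, gives a quick depth-scaling check that complements watching for loss spikes.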

Updated 2026-02-06

Tags

Data Science

Ch.1 Pre-training - Foundations of Large Language Models

Foundations of Large Language Models

Foundations of Large Language Models Course

Computing Sciences

Ch.2 Generative Models - Foundations of Large Language Models

Transformer
