Essay

Choosing Pre-Norm vs Post-Norm for a Deep Transformer: Stability, Shapes, and Sub-layer Semantics

You are reviewing two candidate implementations of a Transformer block for an internal LLM that must be scaled from 12 to 96 layers without changing the model dimension d. Each block has (1) a multi-head self-attention sub-layer that maps an input H ∈ R^{m×d} to an output in R^{m×d} by running multiple attention heads in parallel, concatenating their outputs, and applying a final linear projection, and (2) a position-wise two-layer FFN applied independently to each token: FFN(h) = σ(hW_h + b_h)W_f + b_f with W_h ∈ R^{d×d_h} and W_f ∈ R^{d_h×d}. Both designs use residual connections and layer normalization (LN), which normalizes each token’s d features using that token’s mean and standard deviation and then applies a learnable gain and bias.
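
To make the shape bookkeeping concrete, here is a minimal check sketch, assuming PyTorch, with illustrative sizes (m = 10, d = 512, d_h = 2048, 8 heads — none of these values come from the prompt) and ReLU standing in for the unspecified σ:

```python
import torch
import torch.nn as nn

# Illustrative sizes (assumptions, not given in the prompt); d must be
# divisible by the number of heads.
m, d, d_h, n_heads = 10, 512, 2048, 8

H = torch.randn(1, m, d)  # one sequence of m tokens, d features each (batch of 1)

# Multi-head self-attention: heads run in parallel, their outputs are
# concatenated and passed through a final linear projection, so the
# sub-layer maps (m, d) back to (m, d) and is addable to its input.
attn = nn.MultiheadAttention(embed_dim=d, num_heads=n_heads, batch_first=True)
attn_out, _ = attn(H, H, H)
assert attn_out.shape == H.shape  # (1, m, d)

# Position-wise FFN: W_h maps d -> d_h and W_f maps d_h -> d, so FFN(h)
# again matches the input width and x + FFN(x) is well defined.
# ReLU stands in for the unspecified sigma.
W_h, b_h = torch.randn(d, d_h), torch.zeros(d_h)
W_f, b_f = torch.randn(d_h, d), torch.zeros(d)
ffn_out = torch.relu(H @ W_h + b_h) @ W_f + b_f
assert ffn_out.shape == H.shape  # (1, m, d)
```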

Design A (post-norm) uses, for each sub-layer: y = LN(x + F(x)). Design B (pre-norm as defined here) uses, for each sub-layer: y = LN(F(x)) + x.
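
A minimal sketch of the two wrappers exactly as defined above (again assuming PyTorch; F is a placeholder for either the attention or the FFN sub-layer):

```python
import torch.nn as nn

class PostNormSublayer(nn.Module):
    """Design A: y = LN(x + F(x)). LN normalizes the residual sum itself,
    so even the identity path is rescaled on every pass through the block."""
    def __init__(self, d: int, F: nn.Module):
        super().__init__()
        self.F, self.ln = F, nn.LayerNorm(d)

    def forward(self, x):
        return self.ln(x + self.F(x))

class PreNormSublayer(nn.Module):
    """Design B as defined here: y = LN(F(x)) + x. Only the sub-layer output
    is normalized; the residual path bypasses LN untouched."""
    def __init__(self, d: int, F: nn.Module):
        super().__init__()
        self.F, self.ln = F, nn.LayerNorm(d)

    def forward(self, x):
        return self.ln(self.F(x)) + x

# Example wiring (nn.Linear is a stand-in for a real sub-layer):
block_a = PostNormSublayer(d=512, F=nn.Linear(512, 512))
block_b = PreNormSublayer(d=512, F=nn.Linear(512, 512))
```

Note the structural difference the code makes visible: in Design B the residual term x reaches the output unmodified, whereas in Design A the entire sum, identity path included, passes through LN.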

In early training runs at 96 layers, Design A frequently diverges (loss becomes NaN) while Design B trains but shows slightly slower early loss reduction.

Write an engineering recommendation memo (as an essay) that: (a) argues which design you would choose for the 96-layer model and why, explicitly linking your reasoning to how LN placement interacts with residual connections across many stacked blocks; (b) demonstrates that you understand the required tensor shapes through the attention and FFN sub-layers (i.e., why both F(x) terms can be added to x and why the FFN must use W_h and W_f with the given dimensions); and (c) explains one plausible tradeoff your choice introduces for model behavior or optimization (e.g., gradient flow, representational scaling, or sensitivity to initialization), grounded in the two formulas above rather than generic statements.
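
As a starting point for part (c), the divergence pattern can be probed with a toy experiment. The following sketch is illustrative only (a plain nn.Linear stands in for each sub-layer F; sizes are assumptions): it pushes a gradient through 96 stacked sub-layers of each design and compares the gradient norm reaching the input.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
d, m, depth = 64, 8, 96  # toy width, toy sequence length, 96 stacked sub-layers

def input_grad_norm(design: str) -> float:
    x = torch.randn(m, d, requires_grad=True)
    h = x
    for _ in range(depth):
        F = nn.Linear(d, d)   # stand-in sub-layer with its own parameters
        ln = nn.LayerNorm(d)
        if design == "post":  # Design A: y = LN(x + F(x))
            h = ln(h + F(h))
        else:                 # Design B: y = LN(F(x)) + x
            h = ln(F(h)) + h
    h.sum().backward()
    return x.grad.norm().item()

print("post-norm:", input_grad_norm("post"))  # tends to decay with depth
print("pre-norm: ", input_grad_norm("pre"))   # identity term survives each layer
```

In this toy setting the post-norm gradient typically decays with depth, because each layer's LN Jacobian rescales the full sum x + F(x), identity path included, while Design B contributes dy/dx = J_{LN∘F}(x) + I per sub-layer, so an unscaled identity term persists through all 96 layers. The flip side, visible in the same sketch, is that the pre-norm hidden state grows with depth, since each layer adds a unit-scale LN(F(x)) term to a residual stream that is never renormalized.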
