
Dimensions of Large Language Models: Depth and Width

Despite varying implementation details, many Large Language Models (LLMs) share a common foundational Transformer architecture designed for language modeling. These models earn the designation 'large' because they feature significant scale in both their depth (the number of stacked layers or blocks) and their width (the dimensionality of their internal representations).
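Depth and width trade off differently in terms of scale: parameter count grows linearly with depth but quadratically with width. As a rough, illustrative sketch (ignoring embeddings, biases, and layer norms), each Transformer block contributes on the order of 12·d² weights, so we can estimate total size from the two dimensions alone. The `approx_transformer_params` helper below is a hypothetical illustration, not a library function:

```python
def approx_transformer_params(depth: int, width: int) -> int:
    """Rough parameter estimate for a Transformer of `depth` blocks
    with hidden size `width` (ignores embeddings, biases, layer norms)."""
    # Per block: attention has ~4*d^2 weights (Q, K, V, output projections);
    # the feed-forward network adds ~8*d^2 with the common 4x expansion.
    per_block = 12 * width * width
    return depth * per_block

small = approx_transformer_params(depth=12, width=768)
wide = approx_transformer_params(depth=12, width=1536)   # double the width
deep = approx_transformer_params(depth=24, width=768)    # double the depth

print(small, wide, deep)
```

Doubling the width quadruples the estimated parameter count, while doubling the depth only doubles it, which is why scaling up an LLM usually means growing both dimensions together.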

Updated 2026-04-19


Tags

Foundations of Large Language Models

Ch.2 Generative Models - Foundations of Large Language Models

Foundations of Large Language Models Course

Computing Sciences
