Architectural Categories of Pre-trained Transformers

Within Natural Language Processing, pre-trained models built on the Transformer are commonly categorized by their underlying architecture. The primary categories, each of which serves as a target for self-supervised pre-training, are encoder-only, decoder-only, and encoder-decoder structures; canonical examples are BERT, GPT, and T5, respectively.
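As a minimal sketch of the three categories, the snippet below instantiates one representative model of each kind. It assumes the Hugging Face transformers library and the checkpoints bert-base-uncased, gpt2, and t5-small, none of which are named in the original text; they are chosen purely for illustration.

```python
# A minimal sketch of the three architectural categories, using the
# Hugging Face transformers library (an assumption; the source names
# no specific toolkit or checkpoints).
from transformers import AutoModel, AutoModelForCausalLM, AutoModelForSeq2SeqLM

# Encoder-only: bidirectional self-attention over the full input (e.g. BERT).
encoder_only = AutoModel.from_pretrained("bert-base-uncased")

# Decoder-only: causal (left-to-right) self-attention (e.g. GPT-2).
decoder_only = AutoModelForCausalLM.from_pretrained("gpt2")

# Encoder-decoder: an encoder over the input plus a decoder that
# cross-attends to the encoder's hidden states (e.g. T5).
encoder_decoder = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

for name, model in [("encoder-only", encoder_only),
                    ("decoder-only", decoder_only),
                    ("encoder-decoder", encoder_decoder)]:
    n_params = sum(p.numel() for p in model.parameters())
    print(f"{name}: {model.config.model_type}, {n_params:,} parameters")
```

The key practical difference is the attention pattern: encoder-only models see the whole input at once, decoder-only models attend only to earlier positions, and encoder-decoder models combine a bidirectional encoder with a causal decoder linked by cross-attention.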
