
Pre-train-then-align Method for LLM Development

The pre-train-then-align method is a two-stage approach for developing Large Language Models. In the pre-training stage, the model is trained on vast amounts of text with a next-token prediction objective. In the subsequent alignment stage, the model is tuned to follow user instructions, intents, and preferences. This alignment phase typically encompasses techniques such as instruction alignment, human preference alignment, and prompting.
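The two stages can be sketched with a deliberately tiny stand-in model. This is an illustrative toy, not an actual LLM pipeline: "pre-training" here fits a bigram next-token distribution from counts, and "alignment" crudely boosts the probability of preferred continuations (a stand-in for instruction or preference tuning). The function names, the `weight` parameter, and the example corpus are all invented for illustration.

```python
from collections import Counter, defaultdict

# Stage 1 (toy "pre-training"): estimate P(next | prev) from bigram
# counts, mimicking a next-token prediction objective on a corpus.
def pretrain(corpus):
    counts = defaultdict(Counter)
    for sentence in corpus:
        tokens = sentence.split()
        for prev, nxt in zip(tokens, tokens[1:]):
            counts[prev][nxt] += 1
    return {prev: {nxt: c / sum(cs.values()) for nxt, c in cs.items()}
            for prev, cs in counts.items()}

# Stage 2 (toy "alignment"): nudge probability mass toward preferred
# continuations, then renormalize -- a crude stand-in for tuning the
# model to match instructions or human preferences.
def align(model, preferences, weight=0.5):
    for prev, preferred in preferences.items():
        probs = model.setdefault(prev, {})
        probs[preferred] = probs.get(preferred, 0.0) + weight
        total = sum(probs.values())
        model[prev] = {nxt: p / total for nxt, p in probs.items()}
    return model

corpus = ["the cat sat", "the dog sat", "the cat ran"]
model = pretrain(corpus)                 # P(cat|the)=2/3, P(dog|the)=1/3
model = align(model, {"the": "cat"})     # shift mass toward "cat"
best = max(model["the"], key=model["the"].get)
```

After alignment, `best` is `"cat"` and the distribution over continuations of `"the"` still sums to one. Real systems replace both stages with gradient-based training of a neural network, but the division of labor is the same: learn general language statistics first, then adjust behavior toward what users prefer.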


Updated 2026-04-30

Tags

Ch.2 Generative Models - Foundations of Large Language Models


Foundations of Large Language Models Course

Computing Sciences

Ch.4 Alignment - Foundations of Large Language Models
