
Definition of LLM Alignment

LLM alignment refers to the process of guiding a Large Language Model to behave in ways that are consistent with human intentions. The guidance for this process can be derived from various sources that reflect human preferences, such as labeled data and direct human feedback.
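As a rough illustration (not part of the original card), the sketch below shows in plain Python the two forms these guidance sources commonly take: a human-labeled demonstration used for supervised fine-tuning, and a preference comparison collected as direct human feedback. The class names, field names, and toy instances are hypothetical, chosen only to make the distinction concrete.

```python
from dataclasses import dataclass


@dataclass
class SFTExample:
    """Labeled data: a human-written reference answer the model should imitate."""
    prompt: str     # instruction given to the model
    response: str   # human-labeled reference response


@dataclass
class PreferenceExample:
    """Direct human feedback: an annotator's comparison of two model responses."""
    prompt: str     # instruction shown to the annotator
    chosen: str     # response the annotator preferred
    rejected: str   # response the annotator rejected


# Hypothetical toy instances of each guidance signal.
sft = SFTExample(
    prompt="Summarize the article in one sentence.",
    response="The article argues that regular exercise improves long-term memory.",
)
pref = PreferenceExample(
    prompt="Explain photosynthesis to a child.",
    chosen="Plants use sunlight, water, and air to make their own food.",
    rejected="Photosynthesis converts CO2 and H2O into C6H12O6 via the Calvin cycle.",
)

print(sft)
print(pref)
```

In practice, labeled examples of this kind typically drive supervised fine-tuning, while preference comparisons are commonly used to train a reward model or to optimize the model directly, as in RLHF-style or preference-optimization methods.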

