Classification

Fundamental Approaches to LLM Alignment

Two of the most widely used and foundational approaches for aligning Large Language Models are instruction alignment and human preference alignment. Instruction alignment typically relies on supervised fine-tuning (SFT) to teach the model to follow user instructions, while human preference alignment often uses reinforcement learning from human feedback (RLHF), where a reward signal is derived from human preference judgments. A minimal sketch contrasting the two objectives follows below.
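The sketch below is illustrative only, assuming a hypothetical toy model (`TinyLM`) and random data rather than any real LLM or dataset. It contrasts the SFT objective (next-token cross-entropy on instruction-response sequences) with the Bradley-Terry loss commonly used to train a reward model on human preference pairs; the trained reward model would then drive an RL algorithm such as PPO, which is not shown here.

```python
# Minimal sketch (PyTorch) of the two alignment objectives.
# TinyLM, shapes, and the random data are hypothetical stand-ins.

import torch
import torch.nn.functional as F

class TinyLM(torch.nn.Module):
    """Toy causal LM: embedding + linear head, standing in for a real LLM."""
    def __init__(self, vocab_size=100, d_model=32):
        super().__init__()
        self.embed = torch.nn.Embedding(vocab_size, d_model)
        self.head = torch.nn.Linear(d_model, vocab_size)

    def forward(self, tokens):  # (batch, seq) -> (batch, seq, vocab)
        return self.head(self.embed(tokens))

def sft_loss(model, tokens):
    """Instruction alignment: supervised fine-tuning on (instruction, response)
    token sequences via next-token cross-entropy."""
    logits = model(tokens[:, :-1])
    targets = tokens[:, 1:]
    return F.cross_entropy(logits.reshape(-1, logits.size(-1)), targets.reshape(-1))

def preference_loss(reward_chosen, reward_rejected):
    """Human preference alignment: Bradley-Terry loss for training a reward
    model on pairs where annotators preferred one response over another.
    The resulting reward model is then used by an RL step (e.g., PPO)."""
    return -F.logsigmoid(reward_chosen - reward_rejected).mean()

# Usage with random data (illustrative only, not a real training run)
model = TinyLM()
batch = torch.randint(0, 100, (4, 16))   # fake instruction+response token ids
print("SFT loss:", sft_loss(model, batch).item())
chosen, rejected = torch.randn(4), torch.randn(4)
print("Preference (reward-model) loss:", preference_loss(chosen, rejected).item())
```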
