Guidance Sources for LLM Alignment

The process of aligning a Large Language Model relies on guidance derived from human preferences. This guidance can be supplied in several forms, including labeled preference data, direct human feedback, or other explicit expressions of preference.
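As a minimal sketch of how such guidance is commonly represented in practice (an illustration, not a method specified by this card): direct human feedback, such as scalar ratings of candidate responses, is often converted into labeled preference pairs of the kind used to train a reward model. All names below are hypothetical.

```python
# Illustrative sketch (assumption, not from the source): converting direct
# human feedback (scalar ratings) into labeled preference pairs.
from dataclasses import dataclass
from itertools import combinations

@dataclass
class PreferencePair:
    prompt: str
    chosen: str    # response the human rated higher
    rejected: str  # response the human rated lower

def ratings_to_pairs(prompt, rated_responses):
    """Turn (response, rating) feedback into pairwise preference labels."""
    pairs = []
    for (resp_a, score_a), (resp_b, score_b) in combinations(rated_responses, 2):
        if score_a == score_b:
            continue  # a tie carries no preference signal
        chosen, rejected = (resp_a, resp_b) if score_a > score_b else (resp_b, resp_a)
        pairs.append(PreferencePair(prompt, chosen, rejected))
    return pairs

pairs = ratings_to_pairs(
    "Explain photosynthesis.",
    [("Plants convert light into chemical energy.", 4),
     ("Photosynthesis is a thing plants do.", 2),
     ("I don't know.", 1)],
)
print(len(pairs))  # three responses with distinct ratings yield 3 pairs
```

The same pair representation also covers the labeled-data case directly, where annotators pick a preferred response without assigning scores.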

Updated 2026-04-19

Tags: Ch.2 Generative Models - Foundations of Large Language Models; Foundations of Large Language Models Course; Computing Sciences