
Inference-Time LLM Alignment

Inference-time alignment is an approach that guides a Large Language Model's behavior as it generates output, rather than altering its parameters through training or fine-tuning. Because alignment is applied while the model is in use, no additional training is required. Key techniques include prompting, which adapts the model to different tasks dynamically and at minimal cost, and rescoring, in which multiple candidate outputs are scored by a model that approximates human preferences (similar to a reward model) and the highest-scoring candidate is returned.
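The rescoring idea can be sketched as best-of-N selection. The snippet below is a minimal illustration, not a real system: `reward` is a hypothetical stand-in for a learned preference model (here a toy heuristic favoring longer, polite answers), and the candidate list stands in for N sampled completions from an LLM.

```python
def reward(prompt: str, completion: str) -> float:
    # Toy stand-in for a reward model that simulates human preferences.
    # A real system would use a trained scorer; here we simply prefer
    # longer completions that include a polite phrasing.
    score = float(len(completion.split()))
    if "please" in completion.lower():
        score += 5.0
    return score

def best_of_n(prompt: str, candidates: list[str]) -> str:
    # Rescoring: score every candidate output and return the best one.
    # The model's parameters are never updated; alignment happens
    # purely at inference time.
    return max(candidates, key=lambda c: reward(prompt, c))

prompt = "How do I reset my password?"
candidates = [  # stand-in for N sampled completions
    "Figure it out.",
    "Please go to Settings > Account > Reset Password and follow the steps.",
    "Reset it.",
]
print(best_of_n(prompt, candidates))
```

Swapping the heuristic for a genuine reward model yields the common best-of-N (rejection sampling) setup, where alignment quality scales with N at the cost of extra inference compute.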

Updated 2026-04-30

Tags

Ch.4 Alignment - Foundations of Large Language Models

Foundations of Large Language Models

Foundations of Large Language Models Course

Computing Sciences

Ch.5 Inference - Foundations of Large Language Models
