
Reward Model Learning in RLHF

In Reinforcement Learning from Human Feedback (RLHF), the reward model is trained on a dataset of human preferences. This dataset is compiled from annotations in which human evaluators compare and rank multiple model-generated responses to the same prompt; in practice, the rankings are usually broken down into pairwise comparisons between a preferred and a dispreferred response. The reward model learns to assign higher scores to the outputs humans prefer, effectively internalizing the preference criteria implicit in the comparison data.
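A minimal sketch of this training objective, assuming a Bradley-Terry style pairwise ranking loss in PyTorch. The function and variable names below are illustrative only and are not taken from the source or any particular library:

```python
import torch
import torch.nn.functional as F

def pairwise_preference_loss(chosen_rewards: torch.Tensor,
                             rejected_rewards: torch.Tensor) -> torch.Tensor:
    """Bradley-Terry style ranking loss over pairwise human preferences.

    chosen_rewards / rejected_rewards: scalar reward-model scores for the
    preferred and dispreferred response to the same prompt, shape (batch,).
    """
    # Maximize log P(chosen is preferred) = log sigmoid(r_chosen - r_rejected),
    # i.e. push the reward of the human-preferred response above the other.
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()

# Toy check with random scores standing in for reward-model outputs.
chosen = torch.randn(8)
rejected = torch.randn(8)
print(pairwise_preference_loss(chosen, rejected))
```

In an actual RLHF pipeline, the two score tensors would come from running the reward model over the preferred and dispreferred responses for each prompt in a batch of annotated comparisons.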


