Learn Before
  • Four-Stage Process of Reinforcement Learning from Human Feedback (RLHF)

Matching

A team is aligning a language model with human preferences using a four-stage process. Match each stage of the process to its primary function and the key artifact it produces.
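For reference while matching, below is a minimal sketch of the four stages as they are commonly described (pre-training, supervised fine-tuning, reward modeling, and RL fine-tuning, as in InstructGPT-style pipelines). The stage names, function descriptions, and artifact labels are illustrative conventions rather than this course's exact answer key, and the `Stage` dataclass and `RLHF_STAGES` table are hypothetical constructs introduced only for this example.

```python
# A minimal sketch of the standard four-stage RLHF pipeline, pairing each
# stage with its primary function and the key artifact it produces.
# Stage names and artifact labels follow common convention (e.g.,
# InstructGPT-style RLHF); they are illustrative, not an official answer key.
from dataclasses import dataclass

@dataclass
class Stage:  # hypothetical helper type for this example
    name: str
    primary_function: str
    key_artifact: str

RLHF_STAGES = [
    Stage("Pre-training",
          "learn broad language ability from a large raw-text corpus",
          "base (pre-trained) language model"),
    Stage("Supervised fine-tuning (SFT)",
          "teach instruction-following from human-written demonstrations",
          "SFT model, which serves as the initial RL policy"),
    Stage("Reward modeling",
          "fit a scalar scorer to human preference rankings over responses",
          "reward model"),
    Stage("RL fine-tuning",
          "optimize the policy against the reward model, e.g. with PPO",
          "aligned language model"),
]

# Print the stage -> function -> artifact mapping in pipeline order.
for i, stage in enumerate(RLHF_STAGES, start=1):
    print(f"{i}. {stage.name}: {stage.primary_function} -> {stage.key_artifact}")
```

Note how each stage's artifact feeds the next: the SFT model both initializes the RL policy and is the usual starting point for the reward model, which in turn supplies the optimization signal for the final stage.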

Updated 2025-10-10

Contributors: Gemini AI (Google)

Tags
  • Ch.2 Generative Models - Foundations of Large Language Models
  • Foundations of Large Language Models
  • Foundations of Large Language Models Course
  • Computing Sciences
  • Analysis in Bloom's Taxonomy
  • Cognitive Psychology
  • Psychology
  • Social Science
  • Empirical Science
  • Science

Related
  • Establishing the Initial Policy in RLHF

  • A team is developing a language model designed to align with human preferences. They are following a standard four-stage process. Arrange the following stages in the correct chronological order.

  • A development team is using a four-stage process to align a language model with human preferences. They collect a large dataset where human annotators consistently rank verbose and evasive responses as low quality. This dataset is then used to train a reward model. Finally, the language model is fine-tuned using reinforcement learning, with the reward model providing the optimization signal. However, the final, aligned language model still frequently produces verbose and evasive outputs. Which stage is the most likely source of this failure?

