Learn Before
  • Training the Value Function with a Reward Model

Sequence Ordering

Arrange the following events in the correct chronological order to describe a single update step for a value function that relies on a separate reward model.
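The update step the question refers to can be sketched in code. This is a minimal illustration, assuming a TD(0)-style target and a linear value function; the names `reward_model`, `update_step`, and the specific update rule are illustrative assumptions, not the course's exact formulation:

```python
import numpy as np

rng = np.random.default_rng(0)

def reward_model(state):
    # Stand-in for a separately trained reward model:
    # it scores a state (here, a feature vector) with a scalar reward.
    return float(np.tanh(state.sum()))

def update_step(w, state, next_state, done, lr=0.1, gamma=0.99):
    """One value-function update using the separate reward model:
    (1) query the reward model for the current state's reward,
    (2) form the bootstrapped TD target r + gamma * V(next_state),
    (3) move the value estimate V(state) = w . state toward that target."""
    r = reward_model(state)                        # step 1: reward from the reward model
    target = r + (0.0 if done else gamma * float(w @ next_state))  # step 2: TD target
    td_error = target - float(w @ state)           # prediction error
    w_new = w + lr * td_error * state              # step 3: gradient step on squared TD error
    return w_new, td_error

# One update on a single (state, next_state) transition.
w = np.zeros(4)
s, s_next = rng.normal(size=4), rng.normal(size=4)
w, delta = update_step(w, s, s_next, done=False)
```

The key ordering the sketch makes explicit: the reward model is queried first, the target is formed second, and only then is the value function's parameter vector updated.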

Updated 2025-10-10

Contributors are:

Gemini AI, from Google

Tags
  • Ch.4 Alignment - Foundations of Large Language Models
  • Foundations of Large Language Models
  • Foundations of Large Language Models Course
  • Computing Sciences
  • Comprehension in Revised Bloom's Taxonomy
  • Cognitive Psychology
  • Psychology
  • Social Science
  • Empirical Science
  • Science

Related
  • Value Function Loss in RLHF

  • An AI system is being trained to generate helpful multi-turn dialogues. A state-value function, which estimates the total future reward from the current point in the conversation, is updated using rewards from a separate reward model. The development team observes that the value function consistently assigns very low values to all conversational turns except the very last one, even when the intermediate turns are crucial for a successful outcome. This causes the AI to prematurely end conversations. Which of the following is the most likely cause of this specific issue?

  • Impact of a Biased Reward Model on Value Function Training

  • Advantage Function as TD Error in RLHF

  • Arrange the following events in the correct chronological order to describe a single update step for a value function that relies on a separate reward model.

© 1Cademy 2026