Architecture and Function of the RLHF Reward Model
In Reinforcement Learning from Human Feedback (RLHF), the reward model evaluates the concatenation of an input prompt $x$ and an output $y$. Using a pre-trained Large Language Model (specifically a Transformer decoder) as the base, the model extracts the hidden representation at the last position of the sequence $x \oplus y$, denoted $\mathbf{h}_{\text{last}}$, as a summary of the full semantic content of the sequence. This $d$-dimensional vector is then mapped to a scalar reward score via a linear transformation: $r(x, y) = \mathbf{h}_{\text{last}} \mathbf{W}$, where $\mathbf{W} \in \mathbb{R}^{d \times 1}$ is a linear mapping matrix. The score $r(x, y)$ measures how well the output $y$ aligns with the desired behavior for the prompt $x$.
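A minimal PyTorch sketch of this design follows, assuming a backbone `decoder` module that maps token ids to hidden states of shape (batch, seq_len, d); the class name `RewardModel`, the `reward_head` layer, and the `lengths` argument are illustrative assumptions, not details taken from the course.

```python
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    """Scalar reward head on top of a pre-trained Transformer decoder (sketch)."""

    def __init__(self, decoder: nn.Module, d_model: int):
        super().__init__()
        self.decoder = decoder                      # pre-trained LLM backbone (assumed interface)
        # W in R^{d x 1}: maps the last-position hidden state to a scalar reward
        self.reward_head = nn.Linear(d_model, 1, bias=False)

    def forward(self, input_ids: torch.Tensor, lengths: torch.Tensor) -> torch.Tensor:
        # input_ids: prompt x and output y concatenated, shape (batch, seq_len)
        hidden = self.decoder(input_ids)            # (batch, seq_len, d_model)
        # take the representation at the last (non-padded) position of each sequence
        last_idx = (lengths - 1).clamp(min=0)
        h_last = hidden[torch.arange(hidden.size(0)), last_idx]   # (batch, d_model)
        # r(x, y) = h_last · W, one scalar reward per sequence
        return self.reward_head(h_last).squeeze(-1)               # (batch,)
```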
References
Reference of Foundations of Large Language Models Course
Tags
Ch.2 Generative Models - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Ch.4 Alignment - Foundations of Large Language Models
Related
Policy Learning in RLHF
Dual Role of the RLHF Reward Model: Ranking-based Training for Scoring Application
Relation between Verifiers and RLHF Reward Models
General Loss Minimization Objective for Reward Model Training
Architecture and Function of the RLHF Reward Model
Reward Model Training as a Ranking Problem in RLHF
Underdetermined Model
Limitations of Outcome-Based Rewards for Entire Sequences
Training a Reward Model with Preference Data
Converting Listwise Rankings to Pairwise Preferences for Reward Model Training
Diagnosing Undesired Model Behavior
An AI team is training a reward model using a dataset where, for each prompt, human annotators have ranked several generated responses from best to worst. What is the fundamental task the reward model is being trained to perform based on this specific type of data?
An AI development team is training a model to act as a helpful assistant. They create a dataset where, for each user prompt, human evaluators are shown two different generated responses and asked to choose which one is better. The model is then trained on this dataset of pairwise preferences. After training, the team observes that the model consistently assigns higher scores to longer, more detailed responses, even when they are less helpful or contain irrelevant information. Which of the following is the most likely explanation for this emergent behavior?
Ranking LLM Outputs as an Alternative to Rating
Regularization in RLHF Reward Model Training
Complexity of Reward Model Training in RLHF
Architecture and Function of the RLHF Value Model
Target Model (Policy Model) in RLHF
Reference Policy Definition in RLHF
A development team is building a system to align a large language model using reinforcement learning from human feedback. Their setup includes a target model for text generation, a reference model, a reward model to score outputs based on human preferences, and a value model to predict future rewards. For computational efficiency, they decide to build the reward model using a Convolutional Neural Network (CNN) and the value model using a Recurrent Neural Network (RNN), while keeping the target and reference models as Transformer decoders. What is the most significant architectural inconsistency in this design compared to a standard implementation?
LLM as the Agent in RLHF
An alignment process for a large language model uses a system composed of four distinct models, all sharing a common underlying architecture. Match each model component with its primary role in this system.
Architectural Consistency in Feedback-Based LLM Alignment
In a typical system for aligning a language model with human feedback, it is common practice to use a Transformer-based architecture for the text-generating models, while employing simpler, non-Transformer architectures for the reward and value models to reduce computational overhead.
Learn After
Reward Model Implementation using a Pre-trained LLM
Troubleshooting a Reward Model's Architecture
Both a standard generative language model and an RLHF reward model are often based on the same core architecture (e.g., a Transformer decoder). What is the key architectural modification that allows the reward model to produce a single scalar quality score for a given text, rather than generating a new sequence of text?
Adapting a Language Model for Reward Prediction
Function and Inputs of the RLHF Reward Model
Sequence-Level Evaluation in Reward Models