Multiple Choice

Both a standard generative language model and an RLHF reward model are often based on the same core architecture (e.g., a Transformer decoder). What is the key architectural modification that allows the reward model to produce a single scalar quality score for a given text, rather than generating a new sequence of text?
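Below is a minimal sketch of the kind of modification the question points at, assuming PyTorch and the Hugging Face transformers library; the RewardModel class, the base-model name "gpt2", and the example texts are illustrative, not a reference implementation. The idea is that the decoder backbone is kept, and the vocabulary-sized LM head is replaced with a linear "value head" that maps a final hidden state to a single scalar score.

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer  # assumed dependency


class RewardModel(nn.Module):
    """Decoder backbone shared with the generative LM, plus a scalar value head."""

    def __init__(self, base_model_name: str = "gpt2"):  # model name is illustrative
        super().__init__()
        # Transformer decoder backbone without the vocabulary (LM) head.
        self.backbone = AutoModel.from_pretrained(base_model_name)
        # The key modification: project a hidden state to one scalar
        # instead of to vocabulary-sized logits.
        self.value_head = nn.Linear(self.backbone.config.hidden_size, 1)

    def forward(self, input_ids, attention_mask):
        hidden = self.backbone(
            input_ids=input_ids, attention_mask=attention_mask
        ).last_hidden_state                              # (batch, seq_len, hidden_size)
        # Use the representation of the last non-padding token as a summary
        # of the whole prompt + response text.
        last_token_idx = attention_mask.sum(dim=1) - 1   # (batch,)
        summary = hidden[torch.arange(hidden.size(0)), last_token_idx]
        return self.value_head(summary).squeeze(-1)      # (batch,) scalar rewards


tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token                # GPT-2 has no pad token by default
model = RewardModel("gpt2")

batch = tokenizer(
    ["A clear, correct answer.", "An evasive, unhelpful answer."],
    return_tensors="pt", padding=True,
)
with torch.no_grad():
    scores = model(batch["input_ids"], batch["attention_mask"])
print(scores)  # one scalar quality score per input text
```

Scoring the last non-padding token is one common choice because, in a causal decoder, that position has attended to the entire sequence; pooling over all token representations is another option.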


Updated 2025-10-02


Tags

Ch.2 Generative Models - Foundations of Large Language Models


Ch.4 Alignment - Foundations of Large Language Models

Analysis in Bloom's Taxonomy
