Learn Before
Analysis of LLM Alignment
Consider the behaviors of the two language models described in the case study. Determine which model demonstrates better alignment with human intentions, and justify your reasoning by explaining how their responses differ with respect to the core principles of alignment.
Tags
Ch.2 Generative Models - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Analysis in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
A research lab has developed a large language model that is highly capable of generating human-like text. However, during testing, they find it frequently produces outputs that are unhelpful, factually inaccurate, or contrary to basic ethical principles. To address this, the lab initiates a new phase of training that specifically uses human preferences and feedback to steer the model's outputs towards being more helpful, honest, and harmless. What is the primary goal of this new training phase?
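The training phase described above steers the model using human preference data: annotators compare candidate outputs, and a reward model is trained so that preferred responses score higher than rejected ones. A minimal sketch of the pairwise (Bradley-Terry) preference loss commonly used for this, assuming hypothetical scalar reward scores rather than a real reward model:

```python
import math

def pairwise_preference_loss(reward_chosen, reward_rejected):
    """Bradley-Terry pairwise loss: small when the human-preferred
    response scores higher than the rejected one, large otherwise."""
    margin = reward_chosen - reward_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Hypothetical reward-model scores for two candidate replies:
# the helpful, honest reply vs. the unhelpful or inaccurate one.
loss_good = pairwise_preference_loss(2.0, -1.0)  # chosen outscores rejected
loss_bad = pairwise_preference_loss(-1.0, 2.0)   # ranking is inverted
print(round(loss_good, 3), round(loss_bad, 3))   # → 0.049 3.049
```

Minimizing this loss over many labeled comparisons teaches the reward model to reflect human preferences, which can then be used to fine-tune the language model toward helpful, honest, and harmless outputs.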
Classification of Instruction Fine-Tuning as an Alignment Problem
Evaluating Model Training Objectives
Example of Misalignment in Instruction-Following
Challenges in Defining Human Preferences for LLM Alignment