Case Study

Analyzing Model Training with Flawed Data

A team is training a language model for a customer support chatbot. They use a standard supervised learning approach in which the training objective is to maximize the probability of generating the exact responses found in the training dataset. The dataset consists of thousands of real chat logs between human agents and customers. However, the team discovers that in about 5% of these logs, the human agent provided a factually incorrect or unhelpful answer.
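
For concreteness, this setup describes standard maximum-likelihood (cross-entropy) training. Written out (the notation is ours, not the team's), for a dataset D of (customer question x, logged agent response y) pairs, training minimizes

    \mathcal{L}(\theta) = -\sum_{(x,\,y)\in\mathcal{D}} \; \sum_{t=1}^{|y|} \log p_\theta\!\left(y_t \mid y_{<t},\, x\right)

that is, the model is rewarded for assigning high probability to every token of the logged response.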

Analyze how the training objective will likely affect the model's responses when it is deployed and encounters questions similar to those that were answered incorrectly in the training data. Explain the underlying mechanism of the training objective that leads to this outcome.
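
As scaffolding for the analysis, below is a minimal sketch of the per-token cross-entropy loss that the objective above reduces to. This is an assumed PyTorch illustration, not the team's actual code; the tensor names and shapes are placeholders, and random inputs stand in for real data.

    import torch
    import torch.nn.functional as F

    # Minimal illustration of next-token cross-entropy training
    # (assumed sketch; in a real run, `logits` would come from the
    # model and `targets` from the tokenized chat logs).
    vocab_size = 32000
    batch, seq_len = 2, 16

    logits = torch.randn(batch, seq_len, vocab_size)          # model predictions per position
    targets = torch.randint(0, vocab_size, (batch, seq_len))  # tokens of the logged agent reply

    # Average negative log-probability of the logged tokens. Note that
    # nothing in this computation inspects whether the logged answer
    # was factually correct or helpful.
    loss = F.cross_entropy(logits.view(-1, vocab_size), targets.view(-1))
    print(loss.item())

A design point worth noticing when answering: the loss is a pure imitation signal, so the flawed 5% of logs enter the gradient on the same footing as the rest.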

Updated 2025-10-04

Tags

Ch.4 Alignment - Foundations of Large Language Models, Foundations of Large Language Models, Foundations of Large Language Models Course, Computing Sciences, Analysis in Bloom's Taxonomy, Cognitive Psychology, Psychology, Social Science, Empirical Science, Science