
The Alignment Problem in LLMs

The alignment problem arises because pre-training alone does not guarantee that a Large Language Model's outputs match a user's intended goals. A model trained only to predict the next token has not, by default, learned to follow instructions or to adhere to implicit human values, so its behavior can diverge from what users actually want. For example, when asked "How do I change a flat tire?", a purely pre-trained model may simply continue the text with further questions rather than provide an answer.


Updated 2025-10-07


Tags

Ch.4 Alignment - Foundations of Large Language Models

Foundations of Large Language Models

Foundations of Large Language Models Course

Computing Sciences