Multiple Choice

A team fine-tunes a large, pre-trained language model, known for its strong general knowledge, on a highly specialized dataset of legal contracts. They train the model for a very large number of iterations. After fine-tuning, the model demonstrates exceptional performance in generating and interpreting legal text but now provides nonsensical or incorrect answers to simple, general knowledge questions it could easily answer before. What is the most likely explanation for this change in the model's behavior?
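The failure mode the question describes, where long, narrow fine-tuning overwrites capabilities learned during pre-training, is commonly called catastrophic forgetting. A minimal numeric sketch with a hypothetical one-parameter model (the tasks and names here are illustrative stand-ins, not part of the question): train on task A until error is near zero, then train only on task B, and measure how task-A error degrades.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "model": a single weight w fit by gradient descent on MSE.
def train(w, X, y, lr=0.1, steps=200):
    for _ in range(steps):
        grad = 2 * np.mean((w * X - y) * X)  # d/dw of mean((w*x - y)^2)
        w -= lr * grad
    return w

def mse(w, X, y):
    return float(np.mean((w * X - y) ** 2))

# Task A ("general knowledge"): y = 2x.  Task B ("specialized domain"): y = -x.
X = rng.normal(size=100)
y_a, y_b = 2 * X, -X

w = train(0.0, X, y_a)
err_a_before = mse(w, X, y_a)   # near zero after task-A training

w = train(w, X, y_b)            # long fine-tuning on task B only
err_a_after = mse(w, X, y_a)    # task-A error grows sharply

print(f"task-A error before fine-tuning: {err_a_before:.4f}")
print(f"task-A error after fine-tuning:  {err_a_after:.4f}")
```

Because the same parameter must serve both tasks, optimizing it exclusively for task B pulls it away from the task-A solution; the same dynamic, at vastly larger scale, explains the model losing general-knowledge answers after extended fine-tuning on legal contracts.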


Updated 2025-10-02


Tags: Ch.4 Alignment - Foundations of Large Language Models, Foundations of Large Language Models, Foundations of Large Language Models Course, Computing Sciences, Analysis in Bloom's Taxonomy, Cognitive Psychology, Psychology, Social Science, Empirical Science, Science