Tags
Ch.1 Pre-training - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Analysis in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
Adaptation of Pre-trained Models via Full Fine-Tuning
Freezing Encoder Parameters During Fine-Tuning
Evaluating the Direct Application of a General Language Model
A team develops a large language model by training it on a vast collection of internet text, with the sole objective of predicting the next word in a sequence. They then apply the model directly, without any adaptation, to categorize customer support emails as 'Billing Issue', 'Technical Problem', or 'Feature Request'. The model performs poorly. What best explains this outcome?
Mismatch Between Pre-training and Downstream Objectives
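The mismatch can be made concrete with a minimal, purely illustrative Python sketch (the toy hidden size and the stand-in linear heads are assumptions, not any real model's internals): a next-word-prediction head produces one score per vocabulary token, while the email-triage task needs scores over exactly three labels, so the pre-trained output head cannot be used for classification as-is.

```python
import random

VOCAB_SIZE = 50000   # next-word prediction scores every vocabulary token
NUM_CLASSES = 3      # email triage: 'Billing Issue', 'Technical Problem', 'Feature Request'
HIDDEN_DIM = 8       # toy hidden size, chosen only for illustration

random.seed(0)

def linear_head(hidden, out_dim):
    """A toy linear projection standing in for a model's output head."""
    weights = [[random.uniform(-1, 1) for _ in range(len(hidden))]
               for _ in range(out_dim)]
    return [sum(w * h for w, h in zip(row, hidden)) for row in weights]

# A single hidden state, standing in for the encoder's representation of an email.
hidden_state = [random.uniform(-1, 1) for _ in range(HIDDEN_DIM)]

# The pre-trained objective yields a distribution over the whole vocabulary...
lm_logits = linear_head(hidden_state, VOCAB_SIZE)
# ...but the downstream objective needs a distribution over three labels.
class_logits = linear_head(hidden_state, NUM_CLASSES)

print(len(lm_logits))     # 50000
print(len(class_logits))  # 3
```

The two heads live in different output spaces, which is why adaptation (for example, attaching and training a new classification head, as discussed in the related notes above) is required before the pre-trained model can serve the downstream task.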