Troubleshooting a Model Adaptation Pipeline
A machine learning engineer pre-trains a large model on a massive text corpus. The pre-training objective is to predict masked words, so the model's final output layer is designed to produce a probability distribution over a 50,000-word vocabulary. After pre-training is complete, the engineer attempts to adapt the model for a new task: classifying customer reviews into one of three categories (positive, negative, or neutral). They connect the entire pre-trained model to the new dataset and begin fine-tuning, but find that the model's performance is extremely poor and fails to improve. What is the most likely architectural error in this adaptation process, and why does it prevent the model from learning the new task effectively?
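The scenario above hinges on a shape mismatch between the pre-training head and the new task. A minimal, dependency-free sketch (all sizes here are hypothetical stand-ins, not from the original) of the adaptation step the question is probing:

```python
# Hypothetical sizes for illustration only.
HIDDEN = 768        # assumed encoder hidden size
VOCAB = 50_000      # pre-training vocabulary (from the question)
NUM_CLASSES = 3     # positive / negative / neutral

class Linear:
    """Stand-in for a dense output layer: records only its weight shape."""
    def __init__(self, in_features, out_features):
        self.shape = (in_features, out_features)

# Pre-trained model ends in a masked-LM head over the full vocabulary.
lm_head = Linear(HIDDEN, VOCAB)

# Correct adaptation: discard the LM head and attach a fresh,
# randomly initialized head sized to the new label set.
classification_head = Linear(HIDDEN, NUM_CLASSES)

print(lm_head.shape)              # shaped for a 50,000-way distribution
print(classification_head.shape)  # shaped for the 3 review categories
```

Reusing `lm_head` unchanged, as the engineer in the question does, leaves the model emitting a 50,000-way vocabulary distribution that cannot be matched against three class labels.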
Tags
Ch.1 Pre-training - Foundations of Large Language Models