A machine learning engineer has successfully pre-trained a large language model on a massive text corpus with the objective of predicting the next word in a sequence. To adapt this model for a new task of classifying customer reviews as 'positive', 'negative', or 'neutral', the engineer's first step is to remove the model's final output layer. What is the most accurate justification for this action?
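The swap described above can be sketched in a few lines. This is a minimal, framework-free illustration (all class and constant names here are hypothetical, not from any real library): the pre-trained model ends in a head that maps hidden states to vocabulary-sized logits for next-word prediction, which is the wrong output shape for a 3-way sentiment decision, so that head is removed and replaced.

```python
# Hypothetical shape bookkeeping only -- no real training framework is used.
VOCAB_SIZE = 50_000   # pre-training output: one logit per vocabulary token
HIDDEN_DIM = 768      # dimensionality of the model's hidden states (assumed)
NUM_CLASSES = 3       # new task: positive / negative / neutral

class Linear:
    """Stand-in for a dense layer; records shapes, performs no math."""
    def __init__(self, in_features, out_features):
        self.in_features = in_features
        self.out_features = out_features

# The pre-trained stack ends in a next-word prediction head.
pretrained_layers = [
    "embedding",
    "transformer_blocks",
    Linear(HIDDEN_DIM, VOCAB_SIZE),   # final output layer from pre-training
]

# Step 1: drop the pre-training head. Its vocabulary-sized output is
# specific to next-word prediction, not to the new classification task.
backbone = pretrained_layers[:-1]

# Step 2: attach a fresh, randomly initialised head sized for 3 classes.
classifier = backbone + [Linear(HIDDEN_DIM, NUM_CLASSES)]
```

The learned representations in the backbone are kept; only the task-specific output layer changes shape, which is the justification the question is probing for.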
Tags
Ch.1 Pre-training - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Analysis in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
Troubleshooting a Model Adaptation Pipeline
Rationale for Modifying a Pre-trained Model