Impact of Pre-training/Fine-tuning Mismatch on Downstream Tasks
A language model is pre-trained with a masked language modeling objective: 15% of the input tokens in each sentence are replaced with a special, artificial token (e.g., [MASK]), and the model's goal is to predict the original tokens at those positions. After this pre-training phase, the model is fine-tuned for two different downstream applications:
- Text Summarization: The model takes a long document as input and must generate a short, coherent summary.
- Sentiment Classification: The model takes a movie review as input and must classify it as 'positive', 'negative', or 'neutral'.
Analyze how the mismatch between the pre-training objective (which uses the special token) and the fine-tuning data (which contains only natural language) could affect the model's performance on each of these two tasks differently. Which task do you predict would be more negatively affected by this discrepancy, and why?
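For concreteness, the sketch below illustrates the input mismatch described above, assuming BERT-style random token masking; the function name `mask_tokens`, the `MASK_RATE` constant, and the sample sentences are illustrative and not part of the original exercise.

```python
import random

MASK_TOKEN = "[MASK]"
MASK_RATE = 0.15  # fraction of tokens replaced during pre-training

def mask_tokens(tokens, mask_rate=MASK_RATE, rng=random):
    """Replace roughly mask_rate of the tokens with MASK_TOKEN.

    Returns the corrupted input and a dict mapping each masked
    position to the original token the model must predict.
    """
    masked, targets = [], {}
    for i, tok in enumerate(tokens):
        if rng.random() < mask_rate:
            masked.append(MASK_TOKEN)
            targets[i] = tok  # prediction target at this position
        else:
            masked.append(tok)
    return masked, targets

# Pre-training input: contains the artificial [MASK] token.
pretrain_input, targets = mask_tokens(
    "the plot was thin but the acting was superb".split()
)
print(pretrain_input, targets)

# Fine-tuning input (summarization or sentiment classification):
# natural text only -- the model never sees [MASK] here, which is
# exactly the pre-train/fine-tune mismatch the question asks about.
finetune_input = "the plot was thin but the acting was superb".split()
print(finetune_input)
```

Note that during fine-tuning the [MASK] embedding learned in pre-training is never activated, so any representations conditioned on its presence go unused.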
Tags
- Ch.1 Pre-training - Foundations of Large Language Models
- Foundations of Large Language Models
- Foundations of Large Language Models Course
- Computing Sciences
- Analysis in Bloom's Taxonomy
- Cognitive Psychology
- Psychology
- Social Science
- Empirical Science
- Science
Related
- Impact of Pre-training/Fine-tuning Mismatch on Downstream Tasks
- Troubleshooting a Pre-trained Model's Performance
- Permuted Language Modeling (PLM)