Essay

Impact of Pre-training/Fine-tuning Mismatch on Downstream Tasks

A language model is pre-trained with an objective in which 15% of the input tokens in each sentence are replaced by a special, artificial mask token. The model's goal is to predict the original tokens at the replaced positions (a minimal sketch of this corruption step follows the task list below). After this pre-training phase, the model is fine-tuned for two different downstream applications:

  1. Text Summarization: The model takes a long document as input and must generate a short, coherent summary.
  2. Sentiment Classification: The model takes a movie review as input and must classify it as 'positive', 'negative', or 'neutral'.
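
For concreteness, the corruption step of the pre-training objective can be sketched as follows. This is a minimal illustration, not a reference implementation from the question: `mask_token_id` stands in for the special artificial token, and the `-100` ignore-index is an assumption borrowed from common training setups.

```python
import random

def mask_tokens(token_ids, mask_token_id, mask_prob=0.15):
    """Corrupt a token sequence for masked-token pre-training.

    Each position is independently selected with probability `mask_prob`
    and replaced by the special mask token; the model is trained to
    predict the original token at every replaced position.
    """
    inputs = list(token_ids)
    labels = [-100] * len(token_ids)  # -100: position ignored by the loss
    for i, tok in enumerate(token_ids):
        if random.random() < mask_prob:
            inputs[i] = mask_token_id  # artificial token, never seen in natural text
            labels[i] = tok            # prediction target: the original token
    return inputs, labels
```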

Analyze how the mismatch between the pre-training objective (which relies on the special mask token) and the fine-tuning data (which contains only natural language) could affect the model's performance on each of these two tasks in distinct ways. Which task do you predict would be more negatively affected by this discrepancy, and why?
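
One way to make the discrepancy concrete is to measure how often the special token actually appears in a batch. The helper below is hypothetical (`mask_fraction` is not part of the question); it simply shows that the input distribution shifts from roughly 15% mask tokens during pre-training to exactly zero during fine-tuning on natural text.

```python
def mask_fraction(batch_token_ids, mask_token_id):
    """Fraction of positions in a batch occupied by the mask token.

    Roughly 0.15 for pre-training batches produced by `mask_tokens`
    above, and 0.0 for fine-tuning batches of natural language
    (documents, movie reviews), which never contain the artificial token.
    """
    total = sum(len(seq) for seq in batch_token_ids)
    masked = sum(seq.count(mask_token_id) for seq in batch_token_ids)
    return masked / max(total, 1)
```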
