Learn Before
Adaptation Effort in Unsupervised Pre-training
Although unsupervised pre-training on large-scale unlabeled data establishes a good starting point for optimization, it has a significant limitation: adapting the pre-trained model still demands considerable effort, since it must subsequently be trained on task-specific labeled data.
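As a minimal sketch of the two stages this card contrasts, the toy PyTorch script below first "pre-trains" a tiny Transformer encoder on unlabeled token sequences with a masked-prediction objective, then still has to run a second, separate supervised loop over task-specific labeled data to fit a classification head. Everything here (vocabulary size, model dimensions, the random stand-in data, and the names TinyEncoder, lm_head, clf_head) is an illustrative assumption, not the book's implementation.

import torch
import torch.nn as nn

VOCAB, DIM, MASK_ID = 100, 32, 0

class TinyEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, DIM)
        self.layer = nn.TransformerEncoderLayer(DIM, nhead=4, batch_first=True)

    def forward(self, tokens):  # (batch, seq) -> (batch, seq, DIM)
        return self.layer(self.embed(tokens))

encoder = TinyEncoder()

# Stage 1: unsupervised pre-training on unlabeled token sequences.
lm_head = nn.Linear(DIM, VOCAB)  # predicts the original token at each position
opt = torch.optim.Adam(list(encoder.parameters()) + list(lm_head.parameters()))
unlabeled = torch.randint(1, VOCAB, (64, 16))  # stand-in for web-scale text
for _ in range(50):
    mask = torch.rand(unlabeled.shape) < 0.15  # hide ~15% of tokens
    masked = unlabeled.clone()
    masked[mask] = MASK_ID
    loss = nn.functional.cross_entropy(
        lm_head(encoder(masked))[mask], unlabeled[mask])
    opt.zero_grad(); loss.backward(); opt.step()

# Stage 2: the adaptation effort the card describes. Even with good
# pre-trained weights, a task head must still be trained on labeled
# examples (here, fake sentence-level binary labels).
clf_head = nn.Linear(DIM, 2)
opt = torch.optim.Adam(list(encoder.parameters()) + list(clf_head.parameters()))
labeled_x = torch.randint(1, VOCAB, (32, 16))
labeled_y = torch.randint(0, 2, (32,))
for _ in range(50):
    logits = clf_head(encoder(labeled_x).mean(dim=1))  # mean-pool over tokens
    loss = nn.functional.cross_entropy(logits, labeled_y)
    opt.zero_grad(); loss.backward(); opt.step()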
Tags
Foundations of Large Language Models
Ch.1 Pre-training - Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Related
Benefits of Unsupervised Pre-training
Initial Language Model Training Strategy
A research team is developing a large neural network for various language tasks. In the initial training phase, they use a vast dataset of unlabeled text from the internet. The model's objective is not tied to any specific end-user application (like translation or sentiment classification), but rather to learn the underlying structure and statistical patterns of the language itself. What is the fundamental purpose of this initial training approach?
A research team is training a large neural network on a massive dataset of unlabeled text from the web. The training objective is to predict a masked word within a sentence based on its surrounding context. No task-specific labels, such as sentiment scores or document categories, are provided during this stage. What is the primary goal of this training methodology?