Learn Before
Comparability of Pre-training Tasks
Despite their differences, pre-training tasks can be evaluated and compared against one another using a consistent framework and a standardized experimental setup.
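As a minimal sketch of what such a controlled comparison might look like (not taken from the course material): two self-supervised objectives, next-token prediction and masked language modeling, are evaluated on the same toy model, the same data, and the same initialization, so that any difference in the measured loss can be attributed to the objective rather than to the setup. The model, vocabulary size, and masking rate below are illustrative assumptions.

```python
# Hypothetical sketch: comparing two pre-training objectives under an
# otherwise identical configuration (same architecture, data, and init).
import torch
import torch.nn as nn

VOCAB, DIM, MASK_ID = 100, 32, 0            # toy vocabulary; id 0 reserved as [MASK]

class TinyLM(nn.Module):
    """One shared architecture reused for both objectives."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, DIM)
        self.proj = nn.Linear(DIM, VOCAB)
    def forward(self, x):
        return self.proj(self.embed(x))      # (batch, seq, vocab) logits

def next_token_loss(model, tokens):
    """Causal objective: predict token t+1 from token t."""
    logits = model(tokens[:, :-1])
    return nn.functional.cross_entropy(
        logits.reshape(-1, VOCAB), tokens[:, 1:].reshape(-1))

def masked_lm_loss(model, tokens, mask_rate=0.15):
    """Denoising objective: hide ~15% of tokens and predict the originals."""
    mask = torch.rand(tokens.shape) < mask_rate
    corrupted = tokens.masked_fill(mask, MASK_ID)
    logits = model(corrupted)
    if not mask.any():
        return torch.tensor(0.0)
    return nn.functional.cross_entropy(logits[mask], tokens[mask])

if __name__ == "__main__":
    torch.manual_seed(0)
    data = torch.randint(1, VOCAB, (8, 16))  # identical toy corpus for both runs
    for name, loss_fn in [("next-token", next_token_loss),
                          ("masked-LM", masked_lm_loss)]:
        torch.manual_seed(0)                 # identical initialization for both runs
        model = TinyLM()
        print(name, float(loss_fn(model, data)))
```

The point of the sketch is the experimental design, not the numbers: everything except the training objective is held fixed, which is what makes the two tasks comparable.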
Tags
Ch.1 Pre-training - Foundations of Large Language Models
Foundations of Large Language Models
Computing Sciences
Foundations of Large Language Models Course
Related
Contrastive Learning (CTL)
Extensions of PTMs
Applying and Adapting Pre-trained Models to Downstream Tasks
Unsupervised Pre-training
Supervised Pre-training
Self-Supervised Learning
Comparison of Pre-training Paradigms
Rationale for Categorizing Pre-training Tasks by Objective
Denoising Autoencoding
Comparability of Pre-training Tasks
Generality of Pre-training Tasks and Performance
Applying Pre-trained Models to Downstream Tasks
Identifying a Pre-training Strategy
Breadth of Pre-training Tasks
A research team is developing a new language model and is considering different pre-training approaches. Match each pre-training scenario below with the correct category of learning it represents.
A language model is being trained on a large corpus of text from the internet. The training process involves randomly hiding 15% of the words in each sentence and then tasking the model with predicting the original identity of these hidden words based on the surrounding context. Which category of pre-training task does this scenario best exemplify, and why?
Comparing Pre-training Task Categories
Comparison of Pre-training Tasks
Learn After
Evaluating a Research Claim on Pre-training Tasks
A research team aims to compare the effectiveness of two different self-supervised pre-training objectives: 'next-token prediction' and 'masked language modeling'. To obtain a valid and reliable conclusion about which objective produces a better language representation, which of the following is the most crucial aspect of their experimental design?
Critique of a Comparative Study