Comparing Multi-Sentence Corruption Techniques
A key pre-training strategy for models designed to understand long documents involves corrupting the input text. Consider two distinct corruption methods: (1) randomly shuffling the order of complete sentences within a document, and (2) replacing a span of text, which could include parts of sentences or multiple full sentences, with a single special token. Analyze how these two methods would likely differ in the types of linguistic understanding they encourage the model to develop.
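The two corruption methods in the prompt can be made concrete with a short sketch. This is an illustrative toy implementation, not any specific model's pre-processing pipeline; the sentinel token `<X>` follows T5-style infilling conventions and is an assumption here.

```python
import random

def shuffle_sentences(sentences, rng=random):
    """Corruption (1): randomly permute the order of complete sentences.

    The model must learn discourse-level cues (coreference, topic flow,
    temporal order) to restore the original arrangement.
    """
    out = sentences[:]
    rng.shuffle(out)
    return out

def mask_span(tokens, start, length, sentinel="<X>"):
    """Corruption (2): replace a contiguous token span with one sentinel.

    The span may cross sentence boundaries; the model must infer both
    the missing content and how many tokens were removed.
    """
    return tokens[:start] + [sentinel] + tokens[start + length:]

# Example usage
doc = ["The cat sat down.", "It began to purr.", "Everyone smiled."]
shuffled = shuffle_sentences(doc, random.Random(0))

tokens = ["The", "cat", "sat", "down", "and", "began", "to", "purr"]
masked = mask_span(tokens, 2, 4)  # -> ["The", "cat", "<X>", "to", "purr"]
```

Note that method (1) preserves every token and corrupts only global order, while method (2) deletes local content entirely, which is one reason the two objectives push the model toward different kinds of linguistic competence.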
Tags
Ch.1 Pre-training - Foundations of Large Language Models
Analysis in Bloom's Taxonomy
Related
Selecting a Training Method for a Summarization Model
When training a model on a document with multiple sentences, what is the primary advantage of corrupting the input by randomly shuffling the order of entire sentences, as opposed to simply reordering individual tokens across the entire document?