Learn Before
Comparing Input Alteration Techniques
Consider two different ways to alter an input sentence for a model that must learn to reconstruct the original text. Method A replaces some words with a special placeholder. Method B keeps all the original words but shuffles their order. Analyze how the primary skills learned by the model would likely differ when trained with Method A versus Method B.
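The two alteration methods can be sketched in a few lines of Python. This is a minimal illustration, not a reference implementation: it assumes simple whitespace tokenization, and the function names, the 30% masking rate, and the `[MASK]` placeholder string are all illustrative choices, not specifics from the question.

```python
import random

def mask_words(sentence, mask_rate=0.3, placeholder="[MASK]", seed=0):
    """Method A: replace a fraction of words with a special placeholder."""
    rng = random.Random(seed)
    return " ".join(
        placeholder if rng.random() < mask_rate else word
        for word in sentence.split()
    )

def shuffle_words(sentence, seed=0):
    """Method B: keep every original word but permute their order."""
    rng = random.Random(seed)
    words = sentence.split()
    rng.shuffle(words)
    return " ".join(words)

original = "the quick brown fox jumps over the lazy dog"
print(mask_words(original))     # some words replaced by [MASK]
print(shuffle_words(original))  # same words, different order
```

Note the contrast the question is probing: Method A hides word identities (so reconstruction requires predicting missing content from context), while Method B preserves every word and removes only their order (so reconstruction requires recovering syntax and word arrangement).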
Tags
Ch.1 Pre-training - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Analysis in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
An engineer is training a model whose task is to reconstruct an original sentence from a modified version of it. The engineer's primary goal is to force the model to learn the semantic meaning of the sentence, independent of the specific ordering of its words. Which of the following modification techniques, when applied to the input sentence, would be most effective for achieving this specific training objective?
Comparing Input Alteration Techniques
Evaluating a Training Strategy for a Summarization Model
Example of Token Reordering in Denoising Autoencoding