Learn Before
A research team is pre-training an encoder-decoder model specifically for the task of correcting complex grammatical errors and improving sentence structure in user-generated text. The team wants to select a pre-training objective that will best prepare the model for this downstream task. Which of the following input corruption strategies is most likely to be effective, and why?
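As a concrete illustration of one candidate corruption strategy, the sketch below shows T5-style span corruption: contiguous token spans are replaced with sentinel tokens in the encoder input, and the decoder learns to reconstruct the dropped spans. Because the model must repair locally distorted text from surrounding context, this objective plausibly matches the grammar-correction task in the question. This is a minimal sketch, not any team's actual implementation; the function name, fixed span length, and `<extra_id_N>` sentinel format are illustrative assumptions.

```python
import random

def span_corrupt(tokens, corruption_rate=0.15, span_len=3, seed=0):
    """Illustrative T5-style span corruption (not a production recipe).

    Returns (encoder_input, decoder_target): masked spans become
    sentinel tokens in the input; the target lists each sentinel
    followed by the original tokens it replaced.
    """
    rng = random.Random(seed)
    n = len(tokens)
    # Number of tokens to corrupt, grouped into fixed-length spans.
    n_corrupt = max(1, round(n * corruption_rate))
    n_spans = max(1, n_corrupt // span_len)
    masked = [False] * n
    for _ in range(n_spans):
        start = rng.randrange(0, n - span_len + 1)
        for i in range(start, start + span_len):
            masked[i] = True

    enc_input, dec_target, sentinel_id = [], [], 0
    i = 0
    while i < n:
        if masked[i]:
            sentinel = f"<extra_id_{sentinel_id}>"
            sentinel_id += 1
            enc_input.append(sentinel)
            dec_target.append(sentinel)
            while i < n and masked[i]:
                dec_target.append(tokens[i])
                i += 1
        else:
            enc_input.append(tokens[i])
            i += 1
    return enc_input, dec_target
```

With default settings on a 10-token sentence, one span of 3 consecutive tokens is collapsed into a single sentinel, so the encoder sees 8 items and the decoder target holds the sentinel plus the 3 dropped tokens.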
Tags
Ch.1 Pre-training - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Evaluation in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
Selecting a Pre-training Strategy for a Summarization Model
Designing an Experiment to Select a Pre-training Objective