Multiple Choice

A research team is pre-training an encoder-decoder model specifically for the task of correcting complex grammatical errors and improving sentence structure in user-generated text. The team wants to select a pre-training objective that will best prepare the model for this downstream task. Which of the following input corruption strategies is most likely to be effective, and why?
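The answer options are not reproduced here, but a strong candidate for this task is span corruption of the kind used to pre-train T5: contiguous spans of the input are replaced with sentinel tokens, and the decoder learns to regenerate the missing spans. This mirrors grammatical error correction, where the model must rewrite damaged stretches of text rather than predict single tokens. A minimal sketch of how such source/target pairs could be built (the `<extra_id_i>` sentinel names follow T5's convention; the hand-picked span positions are purely illustrative, since real pre-training samples spans randomly):

```python
def span_corrupt(tokens, spans):
    """Replace each (start, end) span in tokens with a sentinel token.
    Returns (source, target): the source is the corrupted input, the
    target lists each sentinel followed by the tokens it replaced."""
    source, target = [], []
    prev = 0
    for i, (start, end) in enumerate(spans):
        sentinel = f"<extra_id_{i}>"
        source.extend(tokens[prev:start])  # keep uncorrupted prefix
        source.append(sentinel)            # mark the dropped span
        target.append(sentinel)
        target.extend(tokens[start:end])   # decoder must regenerate this
        prev = end
    source.extend(tokens[prev:])
    target.append(f"<extra_id_{len(spans)}>")  # final sentinel ends the target
    return source, target

tokens = "the quick brown fox jumps over the lazy dog".split()
src, tgt = span_corrupt(tokens, [(1, 3), (5, 6)])
# src: ['the', '<extra_id_0>', 'fox', 'jumps', '<extra_id_1>', 'the', 'lazy', 'dog']
# tgt: ['<extra_id_0>', 'quick', 'brown', '<extra_id_1>', 'over', '<extra_id_2>']
```

The encoder sees `src` and the decoder is trained to emit `tgt`, so the model practices reconstructing fluent text from corrupted context, which is structurally close to correcting errorful user-generated sentences.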

Updated 2025-10-02

Tags: Ch.1 Pre-training - Foundations of Large Language Models; Foundations of Large Language Models; Foundations of Large Language Models Course; Computing Sciences; Evaluation in Bloom's Taxonomy; Cognitive Psychology; Psychology; Social Science; Empirical Science; Science