Evaluating a Pre-training Strategy for a Code Generation Model
Evaluate the team's pre-training strategy. What is a major potential weakness of this approach given their goal, and how could the strategy be fundamentally improved to better achieve it?
Tags
Ch.1 Pre-training - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Evaluation in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
A research team aims to pre-train a language model to be highly robust against a wide variety of real-world text errors, including typos, missing words, and jumbled phrases. Which of the following input corruption strategies during pre-training is most likely to achieve this goal of general robustness?
Rationale for Mixed Corruption Strategies in Pre-training
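As an illustration of the mixed input-corruption idea the question above refers to, here is a minimal sketch of applying several corruption types (character typos, dropped words, and locally jumbled word order) to pre-training text. The function name, parameters, and probabilities are hypothetical, not taken from the course material:

```python
import random
import string

def corrupt(text, rng, p_typo=0.1, p_drop=0.1, p_swap=0.1):
    """Apply a mix of corruption types to a text sample.

    Illustrative only: per word, either drop it (missing word),
    substitute one character (typo), or keep it; then swap some
    adjacent word pairs (jumbled phrasing).
    """
    out = []
    for word in text.split():
        r = rng.random()
        if r < p_drop:
            continue  # simulate a missing word
        if r < p_drop + p_typo and len(word) > 1:
            # simulate a typo: replace one character at random
            i = rng.randrange(len(word))
            word = word[:i] + rng.choice(string.ascii_lowercase) + word[i + 1:]
        out.append(word)
    # simulate jumbled phrasing: swap some adjacent word pairs
    i = 0
    while i < len(out) - 1:
        if rng.random() < p_swap:
            out[i], out[i + 1] = out[i + 1], out[i]
            i += 2
        else:
            i += 1
    return " ".join(out)
```

Sampling the corruption type per word (rather than fixing one type per batch) is one way to expose the model to a broad mixture of error patterns during pre-training.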