Inefficiency of Annotating Obvious Errors
When annotating reasoning steps to train language models, spending effort on identifying and labeling obvious errors is usually an inefficient strategy. Such labels carry little information beyond what the model can already detect on its own, so they provide a low-quality training signal and contribute little to improving the model's complex reasoning abilities. A limited annotation budget is better spent on subtle or ambiguous steps, where a correct label resolves genuine uncertainty.
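As a minimal sketch of this idea, the following hypothetical Python snippet allocates a fixed annotation budget toward the reasoning steps the model is least certain about, skipping steps it already scores as obviously wrong or obviously correct. The function name, the `(step_text, confidence)` pair format, and the confidence values are all illustrative assumptions, not part of any specific annotation tool.

```python
def select_steps_for_annotation(steps, budget):
    """Pick up to `budget` steps whose model confidence is closest to 0.5.

    `steps` is a list of (step_text, confidence) pairs, where confidence is
    the model's own probability that the step is correct. Steps near 0.5 are
    the most ambiguous ones, where a human label adds the most information;
    steps near 0.0 (obvious errors) or 1.0 (obviously correct) add little.
    """
    # Rank by ambiguity: distance of confidence from 0.5, smallest first.
    ranked = sorted(steps, key=lambda s: abs(s[1] - 0.5))
    return [text for text, _ in ranked[:budget]]


# Illustrative example with made-up steps and confidence scores.
steps = [
    ("2 + 2 = 5", 0.02),                 # obvious error: low-value label
    ("x = 3, so x^2 = 9", 0.97),         # obviously correct: low-value label
    ("therefore n must be even", 0.55),  # subtle step: worth annotating
    ("the series converges", 0.48),      # subtle step: worth annotating
]
print(select_steps_for_annotation(steps, budget=2))
# → ['the series converges', 'therefore n must be even']
```

Under this selection rule, the two ambiguous steps consume the budget, while the obvious error and the obviously correct step are left unannotated.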
Tags
Ch.5 Inference - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Learn After
A team is training a language model to solve complex, multi-step word problems by having human annotators review and correct the model's step-by-step reasoning. Given a limited budget for annotation, which of the following strategies would be the most effective for improving the model's core reasoning abilities?
Evaluating an Annotation Strategy for an AI Tutor
Critique of an Annotation Strategy