For a task where a model learns to predict whether one sentence logically follows another (next-sentence prediction), a negative training example is formatted as [CLS] Sentence A . [SEP] Sentence B . [SEP], where Sentence B is sampled at random instead of being the sentence that actually follows Sentence A. The [SEP] token only marks the boundary between the two segments; the "unrelated" judgment comes from the training label attached to the pair, not from [SEP] itself.
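The construction above can be sketched in a few lines. This is a minimal illustration, not the exact BERT preprocessing pipeline: the function name `make_nsp_example` and the toy corpus are hypothetical, and real implementations work on tokenized subword sequences rather than raw strings.

```python
import random

def make_nsp_example(sentences, i, negative, rng=random):
    """Build a next-sentence-prediction input pair.

    Positive: sentence i followed by sentence i+1 (label 1).
    Negative: sentence i followed by a randomly sampled other sentence (label 0).
    """
    sent_a = sentences[i]
    if negative:
        # Sample any sentence that is neither sentence i nor its true successor.
        candidates = [s for j, s in enumerate(sentences) if j not in (i, i + 1)]
        sent_b = rng.choice(candidates)
        label = 0
    else:
        sent_b = sentences[i + 1]
        label = 1
    # [SEP] separates the two segments; the label is carried separately.
    text = f"[CLS] {sent_a} [SEP] {sent_b} [SEP]"
    return text, label
```

Note that the same input format is used for positive and negative examples; only the sampled Sentence B and the attached label differ.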
Tags
Ch.1 Pre-training - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Analysis in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
A language model is being trained to perform a classification task to determine if two sentences are logically connected. Which of the following inputs would serve as the most effective negative example for this training process?
Constructing a Negative Training Example