Learn Before
Critiquing a Text-to-Text Training Sample
A machine learning engineer is preparing a dataset to train a model on a sentiment analysis task. They create the following training sample: "The film was a masterpiece. → positive". Based on the principles of a unified, prefix-based text-to-text framework, identify the primary weakness of this sample format and explain how you would correct it to improve the model's ability to handle multiple distinct tasks.
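For concreteness, here is a minimal Python sketch (not part of the original card) contrasting the flawed sample with a prefix-based correction. The prefix strings are illustrative conventions in the style of T5; the exact strings used in any given training run may differ.

    flawed_sample = {
        # The task is implicit: nothing in the input tells the model this is
        # sentiment analysis, so one model cannot route between multiple tasks.
        "input": "The film was a masterpiece. -> positive",
    }

    corrected_sample = {
        # An explicit task prefix names the task in the input text...
        "input": "sentiment: The film was a masterpiece.",
        # ...and the label moves into the target text, keeping the source
        # sequence and the expected output cleanly separated.
        "target": "positive",
    }

    # The same model and format then accommodate other tasks via other prefixes:
    translation_sample = {
        "input": "translate English to German: The film was a masterpiece.",
        "target": "Der Film war ein Meisterwerk.",
    }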
Tags
Ch.1 Pre-training - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Evaluation in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
Example of a T5 Machine Translation Training Sample with Special Tokens
Example of a T5 Question-Answering Sample
Example of a T5 Simplification Task Sample
Differentiating Encoder and Decoder Sequences with Start Symbols
Versatility of the T5 Text-to-Text Format
Definition of c_gold
Formula for Input Embedding Composition
A researcher wants to train a model to perform a new task: converting a sentence from passive voice to active voice. Given the passive input sentence 'The cake was eaten by the dog' and the desired active output 'The dog ate the cake', which of the following training samples is correctly structured according to the unified, prefix-based text-to-text format?
A single text-to-text model is being trained on a dataset containing samples for four different tasks. Each sample's input begins with a prefix that instructs the model on what to do. Match each input sample (Source Text) with the most likely task it is intended for.
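As a companion to the two exercises above, the following is a hedged sketch of what a mixed, prefix-routed dataset might look like. The prefix strings and samples are hypothetical illustrations of the convention, not data from the course.

    mixed_dataset = [
        # Machine translation: the prefix names both source and target language.
        {"input": "translate English to French: The weather is nice.",
         "target": "Il fait beau."},
        # Summarization: one prefix, arbitrarily long source text.
        {"input": "summarize: <long article text>",
         "target": "<short summary>"},
        # Sentiment analysis: the label is emitted as ordinary target text.
        {"input": "sentiment: I will never buy this again.",
         "target": "negative"},
        # Voice conversion, structured as in the passive-to-active question above.
        {"input": "convert passive to active: The cake was eaten by the dog.",
         "target": "The dog ate the cake."},
    ]

    # Every sample shares one text-to-text interface, so a single model,
    # training loop, and loss function serve all four tasks.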