Learn Before
Versatility of the T5 Text-to-Text Format
The T5 text-to-text framework is not confined to a single application, such as machine translation. Its 'Source Text → Target Text' structure can be generalized to represent a wide array of other natural language processing tasks, which are specified using task-specific prefixes.
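The idea above can be sketched in a few lines of Python: each task is cast into the same `Source Text → Target Text` shape simply by prepending a task-specific prefix to the input. The prefixes shown ("translate English to German:", "summarize:", "cola sentence:") are ones used in the original T5 paper; the helper function and example texts are illustrative, not taken from this page.

```python
# Sketch: representing diverse NLP tasks in T5's unified
# text-to-text format via task-specific prefixes.

def make_sample(prefix: str, source: str, target: str) -> dict:
    """Build one (source, target) training pair with a task prefix."""
    return {"source": f"{prefix} {source}", "target": target}

samples = [
    # Machine translation
    make_sample("translate English to German:",
                "That is good.", "Das ist gut."),
    # Summarization
    make_sample("summarize:",
                "state authorities dispatched emergency crews tuesday "
                "to survey the damage after an onslaught of storms.",
                "storm damage surveyed by emergency crews."),
    # Grammatical acceptability (CoLA), a classification task whose
    # label is emitted as plain text
    make_sample("cola sentence:",
                "The course is jumping well.", "not acceptable"),
]

for s in samples:
    print(s["source"], "->", s["target"])
```

Because every task, including classification, produces its target as a plain string, a single model and training objective can cover all of them without task-specific output heads.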
Tags
Ch.1 Pre-training - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Related
Example of a T5 Machine Translation Training Sample with Special Tokens
Example of a T5 Question-Answering Sample
Example of a T5 Simplification Task Sample
Differentiating Encoder and Decoder Sequences with Start Symbols
Versatility of the T5 Text-to-Text Format
Definition of c_gold
Formula for Input Embedding Composition
A researcher wants to train a model to perform a new task: converting a sentence from passive voice to active voice. Given the passive input sentence 'The cake was eaten by the dog' and the desired active output 'The dog ate the cake', which of the following training samples is correctly structured according to the unified, prefix-based text-to-text format?
Critiquing a Text-to-Text Training Sample
A single text-to-text model is being trained on a dataset containing samples for four different tasks. Each sample's input begins with a prefix that instructs the model on what to do. Match each input sample (Source Text) with the most likely task it is intended for.
Learn After
Example of a T5 Translation Scoring Sample
A developer wants to adapt a pre-trained model that uses a unified text-to-text framework for a new task: sentiment analysis. The goal is for the model to read a customer review and output one of three labels: 'positive', 'negative', or 'neutral'. Which of the following represents the best-formatted source text to send to the model to classify the review 'The battery life is incredible!'?
Designing a New NLP Task Format
A research team is exploring the capabilities of a model that uses a unified 'text-to-text' framework. The model processes an input string (which includes a prefix indicating the task) and generates an output string. Match each of the following natural language processing tasks with the most appropriate input/output format that represents it within this framework.