Learn Before
Classification via Cloze Task Reframing
A method for performing classification with Large Language Models (LLMs) is to reframe the task as a cloze task, i.e. a 'fill-in-the-blank' problem. The prompt is structured so that the model's task is to complete the text by generating the most appropriate word. Ideally, the word the model generates is one of the predefined class labels, such as 'positive', 'negative', or 'neutral'.
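The reframing above can be sketched in a few lines of Python. This is a minimal, illustrative sketch: the `complete` function is a hypothetical stand-in for an LLM completion call (a trivial keyword heuristic so the example runs), and the prompt wording and label set are assumptions, not part of the original text.

```python
# Sketch of classification-as-cloze. The `complete` function below is a
# hypothetical stub standing in for a real LLM call; in practice you
# would replace it with a call to your model of choice.

LABELS = ["positive", "negative", "neutral"]

def build_cloze_prompt(review: str) -> str:
    # Frame classification as fill-in-the-blank: the most probable
    # completion should be one of the predefined labels.
    return f"Review: {review}\nThe sentiment of this review is"

def complete(prompt: str) -> str:
    # Hypothetical stand-in for an LLM completion, using a trivial
    # keyword heuristic purely so this sketch is runnable.
    text = prompt.lower()
    if any(w in text for w in ("great", "love", "excellent")):
        return " positive"
    if any(w in text for w in ("short", "bad", "poor")):
        return " negative"
    return " neutral"

def classify(review: str) -> str:
    # Map the generated word back onto the predefined label set;
    # fall back to 'neutral' if the model strays outside it.
    word = complete(build_cloze_prompt(review)).strip().lower()
    return word if word in LABELS else "neutral"

print(classify("The battery life on this phone is surprisingly short."))
```

In a real system the completion may not land exactly on a label, which is why production setups often constrain or re-rank the model's output over the label set (see the related 'Constraining LLM Predictions to a Predefined Label Set').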
Tags
Ch.3 Prompting - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Related
Prompt Tuning
The Power of Scale for Parameter-Efficient Prompt Tuning
Basic Workflow of Prompt
Prompt Decomposition
Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing
Example of a Complete Prompt for Machine Translation
Importance of Prompting for Response Quality
Prompting as a Conditional Probability Task
Constraining LLM Predictions to a Predefined Label Set
Prompt Ensembling
Structural Components of a Simple Prompt
Input Embeddings in LLMs
Input Token Sequence in Language Models
Varied Usage of the Term 'Prompt' in Literature
Definition of Prompting
A user provides the following text to a language model: 'Summarize the key points of the following article in three bullet points. Article: [Text of a long article follows here...]'. The model then generates a three-point summary. Based on the formal definition of how these models process information, which of the following best describes the 'prompt' in this interaction?
Analyzing the Components of a Model Input
Classification via Cloze Task Reframing
A language model is given the input text, 'Translate the following sentence to French: The cat is on the mat.' The model's objective is to generate the most likely sequence of words that completes this task. According to the formal, probabilistic definition of how these models operate, what is the fundamental role of the input text?
Learn After
Constraining LLM Predictions to a Predefined Label Set
A developer needs to use a text-generation model to classify customer reviews as 'Positive', 'Negative', or 'Neutral'. The developer decides to reframe this task as a 'fill-in-the-blank' problem, creating a prompt where the model's most probable completion is one of the target labels. Given the customer review, 'The battery life on this phone is surprisingly short.', which of the following prompts is the best application of this reframing technique?
Analyzing Unexpected Model Behavior in Cloze Task Classification
Limitation of Cloze Task Reframing