Example of a Few-Shot Prompt for Polarity Classification
A concrete few-shot prompt for polarity classification provides a large language model with an instruction and several demonstrations before asking it to classify a new input. The prompt opens with a task description, continues with demonstrations pairing inputs with their correct labels, and ends with the new input left unlabeled so the model completes it. Note that the labels in the demonstrations should match the label set given in the instruction:

Assume that the polarity of a text is a label chosen from {positive, negative, neutral}. Identify the polarity of the input.

Input: The traffic is terrible during rush hours, making it difficult to reach the airport on time.
Polarity: negative

Input: The weather here is wonderful.
Polarity: positive

Input: I love the food here. It's amazing!
Polarity:
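The assembly of such a prompt can be sketched in a few lines of Python. This is a minimal illustration, not tied to any particular model API; the helper name `build_prompt` and the constant names are assumptions introduced here, while the instruction, demonstrations, and final input come from the example above.

```python
# Minimal sketch of assembling a few-shot polarity-classification prompt.
# The instruction and demonstrations follow the example in the text;
# `build_prompt` is an illustrative helper, not a library function.

INSTRUCTION = (
    "Assume that the polarity of a text is a label chosen from "
    "{positive, negative, neutral}. Identify the polarity of the input."
)

DEMONSTRATIONS = [
    ("The traffic is terrible during rush hours, making it difficult "
     "to reach the airport on time.", "negative"),
    ("The weather here is wonderful.", "positive"),
]

def build_prompt(new_input: str) -> str:
    """Concatenate the instruction, the labeled demonstrations,
    and the new input, leaving the final label blank."""
    parts = [INSTRUCTION]
    for text, label in DEMONSTRATIONS:
        parts.append(f"Input: {text}\nPolarity: {label}")
    # Ending with "Polarity:" cues the model to complete the label.
    parts.append(f"Input: {new_input}\nPolarity:")
    return "\n\n".join(parts)

print(build_prompt("I love the food here. It's amazing!"))
```

The resulting string is passed to the model as-is; because the prompt ends mid-pattern at "Polarity:", the most natural continuation is one of the three labels.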
Tags
Ch.3 Prompting - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Ch.1 Pre-training - Foundations of Large Language Models