A developer is using a large language model to classify customer feedback into one of three categories: 'Positive', 'Negative', or 'Neutral'. The model correctly identifies the sentiment but often generates free-form text like 'The customer seems unhappy' instead of the specific label 'Negative'. This inconsistency is causing problems for a data analysis pipeline that expects one of the three exact labels. Which of the following approaches would be the most direct and reliable way to ensure the model always outputs one of the three predefined labels?
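The scenario above points at constrained prediction: instead of letting the model generate free-form text, score each of the three allowed labels as a continuation of the prompt and return the highest-scoring one, so the output is guaranteed to be a valid label. The sketch below illustrates the selection step only; `label_log_prob` is a hypothetical stand-in with hard-coded scores, whereas in practice it would query the model for the log-probability of each candidate label given the prompt.

```python
import math

def label_log_prob(prompt: str, label: str) -> float:
    # Hypothetical stand-in: fixed scores for illustration only. A real
    # implementation would ask the model to score `label` as the next
    # token(s) after `prompt` and return that log-probability.
    scores = {"Positive": -4.1, "Negative": -0.6, "Neutral": -2.3}
    return scores[label]

def classify(prompt: str,
             labels=("Positive", "Negative", "Neutral")) -> str:
    # Constrained prediction: pick the candidate label with the highest
    # log-probability. The result is always one of the predefined labels,
    # never free-form text like "The customer seems unhappy".
    return max(labels, key=lambda lbl: label_log_prob(prompt, lbl))

print(classify("Review: 'The product broke after one day.' Sentiment:"))
```

Because the argmax runs over the fixed label set, the downstream pipeline can rely on receiving exactly one of the three expected strings.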
Tags
Ch.3 Prompting - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Analysis in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
Formula for Label Selection via Probability Maximization
Example of a Prompt for Polarity Classification (Negative Sentiment)
Example of a Simple Prompt for Polarity Classification
Automating Support Ticket Classification
Mechanism of Constrained Prediction