Case Study

Unifying Sentiment Labels Across a BERT Classifier and a Prompt-Completion LLM

You are rolling out a polarity (positive/negative/neutral) classifier for customer chat transcripts. For latency reasons, the product team wants a fine-tuned BERT single-text classifier on the primary path, with an LLM prompt-completion fallback whenever the BERT model's top probability falls below 0.55. The analytics team requires that downstream dashboards see a single, consistent label set {positive, negative, neutral} regardless of which model produced the decision, and will reject any output that cannot be deterministically mapped to exactly one of those three labels.
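The routing and label-unification logic described above can be sketched as follows. This is a minimal illustration, not a prescribed implementation: the function names (`classify`, `normalize_label`), the alias table, and the stub model interfaces (`bert_predict` returning a label plus top probability, `llm_complete` returning free text) are all assumptions introduced for this sketch.

```python
# Sketch of the primary/fallback routing with deterministic label mapping.
# All names and interfaces here are illustrative assumptions.

CANONICAL_LABELS = {"positive", "negative", "neutral"}

# Hypothetical alias table for mapping raw model outputs (BERT class
# strings or free-text LLM completions such as "Positive.") onto the
# single label set the dashboards require.
LABEL_ALIASES = {
    "pos": "positive", "positive": "positive",
    "neg": "negative", "negative": "negative",
    "neu": "neutral", "neutral": "neutral",
}


def normalize_label(raw: str) -> str:
    """Map a raw model output to exactly one canonical label.

    Raises ValueError for anything that cannot be mapped, so
    un-mappable outputs are rejected rather than passed downstream.
    """
    key = raw.strip().lower().rstrip(".!")
    if key in LABEL_ALIASES:
        return LABEL_ALIASES[key]
    raise ValueError(f"unmappable sentiment label: {raw!r}")


def classify(text, bert_predict, llm_complete, threshold=0.55):
    """BERT on the primary path; LLM fallback below the threshold.

    `bert_predict(text)` is assumed to return (label, top_probability);
    `llm_complete(text)` is assumed to return a free-text completion.
    """
    label, prob = bert_predict(text)
    if prob >= threshold:
        return normalize_label(label)
    return normalize_label(llm_complete(text))
```

Keeping `normalize_label` as the single exit point for both paths is what guarantees the analytics team's invariant: every decision, whichever model made it, either lands in the three-label set or is rejected with an explicit error.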

Updated 2026-02-06
