Short Answer

Create a Dual-Backend Polarity Classification Spec (BERT + Prompt-Completion) with Label Mapping

You are designing a single internal API endpoint, POST /polarity, that must return exactly one label from {POSITIVE, NEGATIVE, NEUTRAL} for each incoming customer message. For cost and latency reasons, the service will use two backends: (1) a fine-tuned BERT single-text classifier that outputs a probability distribution over {POSITIVE, NEGATIVE, NEUTRAL}, and (2) a prompt-completion LLM that returns free-form text (sometimes a single word like "positive", sometimes a sentence like "Overall the tone is mixed but leans negative.").

Create a concise design spec for this endpoint that includes:

(a) the prompt template you will use for the LLM to elicit a completion suitable for polarity classification;

(b) a label-mapping strategy that deterministically converts the LLM's completion into one of the three labels, including how you handle outputs that do not contain the exact label words;

(c) a decision policy for when to trust BERT versus the LLM (or how to combine them) that explicitly uses BERT's probabilities and the mapped LLM label to produce the final label.

Your answer must be specific enough that an engineer could implement it without additional clarification.
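One possible sketch of the three requested components in Python. Everything here is an illustrative assumption rather than a reference solution: the prompt wording, the cue-word lists, the label precedence order, the 0.85 confidence threshold, and all function names are made up for this sketch.

```python
# Hypothetical sketch: prompt template, deterministic label mapping, and a
# decision policy. Cue lists and thresholds are illustrative assumptions.
import re

LABELS = ("POSITIVE", "NEGATIVE", "NEUTRAL")

# (a) Prompt template for the completion-style LLM backend.
PROMPT_TEMPLATE = (
    "Classify the sentiment of the customer message below.\n"
    "Respond with exactly one word: POSITIVE, NEGATIVE, or NEUTRAL.\n\n"
    "Message: {message}\n"
    "Sentiment:"
)

# Fallback cue words for completions that contain no exact label word.
_POSITIVE_CUES = {"positive", "good", "great", "happy", "satisfied"}
_NEGATIVE_CUES = {"negative", "bad", "poor", "angry", "unhappy"}

def map_completion(completion: str) -> str:
    """(b) Deterministically map a free-form completion to one label."""
    text = completion.strip().lower()
    # 1. An exact label word wins; ties break by fixed precedence order.
    for label in LABELS:
        if re.search(rf"\b{label.lower()}\b", text):
            return label
    # 2. Otherwise count fallback cue words.
    pos = sum(cue in text for cue in _POSITIVE_CUES)
    neg = sum(cue in text for cue in _NEGATIVE_CUES)
    if pos > neg:
        return "POSITIVE"
    if neg > pos:
        return "NEGATIVE"
    # 3. Nothing recognizable: default to NEUTRAL.
    return "NEUTRAL"

def decide(bert_probs: dict, llm_completion: str,
           trust_threshold: float = 0.85) -> str:
    """(c) Final decision combining BERT probabilities and the mapped LLM label."""
    bert_label = max(bert_probs, key=bert_probs.get)
    if bert_probs[bert_label] >= trust_threshold:
        return bert_label  # confident BERT prediction; LLM result ignored
    llm_label = map_completion(llm_completion)
    if llm_label == bert_label:
        return bert_label  # both backends agree
    # Disagreement under low BERT confidence: defer to the mapped LLM label.
    return llm_label
```

One design choice this sketch makes explicit: the mapping in step (b) is a pure function of the completion string, so the endpoint stays deterministic even when the LLM's free-form output varies in phrasing.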

Updated 2026-02-06

Tags

Ch.3 Prompting - Foundations of Large Language Models

Foundations of Large Language Models

Foundations of Large Language Models Course

Computing Sciences

Ch.2 Generative Models - Foundations of Large Language Models

Ch.1 Pre-training - Foundations of Large Language Models