Essay

Designing a Robust Polarity Classifier: BERT vs Prompt-Completion and the Label-Mapping Contract

You are launching a customer-feedback analytics feature that must assign exactly one sentiment label to each incoming message: {positive, negative, neutral}. You have two candidate implementations:

A) Fine-tune a BERT-style single-text classifier that uses the [CLS] representation and a softmax head to output class probabilities.
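To make the mechanics of option A concrete, here is a minimal sketch of the classification head only: a toy 4-dimensional [CLS] vector with hand-picked weights stands in for the encoder output and the learned linear layer (all values are illustrative assumptions, not a real fine-tuned model):

```python
import math

LABELS = ["positive", "negative", "neutral"]

def softmax(logits):
    # Subtract the max logit for numerical stability before exponentiating.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def classify(cls_vector, weights, biases):
    # Linear head: one logit per class, computed from the [CLS] representation.
    logits = [
        sum(w * x for w, x in zip(row, cls_vector)) + b
        for row, b in zip(weights, biases)
    ]
    probs = softmax(logits)
    # Exactly one label per message: argmax over the class probabilities.
    best = max(range(len(LABELS)), key=lambda i: probs[i])
    return LABELS[best], probs

# Toy 4-d [CLS] vector and hand-picked head parameters (illustrative only;
# in practice the encoder produces the vector and the head is learned).
cls_vec = [0.9, -0.2, 0.4, 0.1]
W = [[1.0, 0.0, 0.5, 0.0],   # positive
     [-1.0, 0.3, 0.0, 0.2],  # negative
     [0.0, 0.0, 0.0, 0.0]]   # neutral
b = [0.0, 0.0, 0.0]

label, probs = classify(cls_vec, W, b)
```

Note that the output is by construction always one of the three labels, with calibrated-looking probabilities; there is no free-text stage to parse.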

B) Use an LLM with classification via prompt completion (e.g., a cloze-like prompt that elicits a short completion), then apply a label-mapping layer that converts the model’s generated text into one of {positive, negative, neutral}.
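For option B, a label-mapping layer of the kind described might be sketched as follows. It first checks for a direct label word in the completion, then falls back to a small sentiment cue-word lexicon for descriptive outputs; the lexicon and the neutral fallback are illustrative assumptions, not a prescribed design:

```python
import re

LABELS = {"positive", "negative", "neutral"}

# Small illustrative lexicon for descriptive completions; a production
# system would need a richer list or a secondary classifier.
CUE_WORDS = {
    "positive": {"great", "happy", "love", "satisfied", "pleased"},
    "negative": {"frustrated", "angry", "upset", "disappointed", "terrible"},
    "neutral": {"okay", "fine", "average", "indifferent"},
}

def map_completion(text):
    tokens = re.findall(r"[a-z]+", text.lower())
    # Case 1: the completion contains a label word directly (e.g., "negative").
    for tok in tokens:
        if tok in LABELS:
            return tok
    # Case 2: descriptive output; score each label by matching cue words.
    scores = {lab: sum(t in cues for t in tokens) for lab, cues in CUE_WORDS.items()}
    best = max(scores, key=scores.get)
    # Fall back to neutral when nothing matched (a possible abstain point).
    return best if scores[best] > 0 else "neutral"
```

For example, `map_completion("This sounds frustrated with the service")` maps a descriptive completion to "negative" via the cue lexicon, while a bare "negative" completion is matched directly.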

In a 1–2 page response, recommend one approach for production and justify your choice by explicitly analyzing how (i) the nature of text classification and polarity classification, (ii) the mechanics of BERT single-text classification, and (iii) prompt-completion outputs plus label mapping interact to affect reliability and operational risk. Your answer must include:

1. a concrete example prompt you would use if you choose approach B (or an explanation of why you would avoid B);

2. a proposed label-mapping strategy that handles both “direct label word” outputs (e.g., “negative”) and descriptive outputs (e.g., “This sounds frustrated with the service”);

3. at least two failure modes unique to your non-chosen approach, and how each would show up in real customer messages.

Updated 2026-02-06

Tags: Foundations of Large Language Models Course (Ch.1 Pre-training, Ch.2 Generative Models, Ch.3 Prompting); Computing Sciences
