Case Study

Stabilizing a Decomposition-Based LLM Workflow for a Regulated Customer-Email Triage System

You are leading an internal team deploying an LLM to triage inbound customer emails for a regulated financial-services product. The system must (1) classify each email into one of five queues (Billing, Technical Issue, Account Access, Complaint, Potential Fraud), (2) extract any required follow-up questions to ask the customer, and (3) produce a short agent-facing rationale that cites only information present in the email. The current design uses a fixed list of sub-problems generated up front and then solved in order. In production, you observe two recurring failures: (a) the model asks irrelevant follow-up questions because it forgets constraints discovered earlier (e.g., it already determined the customer is locked out but later asks for billing details), and (b) some emails contain nested issues (e.g., a complaint that includes a potential fraud indicator) that the fixed decomposition misses, leading to misrouting.

As the owner, propose a revised end-to-end LLM interaction plan that addresses both failures. Your plan must specify: (i) how you will generate sub-problems, (ii) how you will solve them sequentially while preserving intermediate results as contextual QA pairs, and (iii) when and how you will trigger recursive decomposition for a sub-problem that turns out to be too complex or multi-faceted. Include at least one concrete example of a contextual QA pair you would carry forward, and explain how it changes the next sub-problem's prompt or decision.
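The three mechanics the prompt asks about (ordered solving, carried-forward QA pairs, and a recursion trigger) can be sketched as below. This is a minimal illustration, not a reference solution: `fake_ask`, `build_prompt`, the `TOO_COMPLEX` sentinel, and `MAX_DEPTH` are all hypothetical names standing in for a real model call and real routing logic.

```python
# Sketch: sub-problems are solved in order, each answer is stored as a
# (question, answer) pair and prepended to later prompts, and a sentinel
# answer triggers recursive decomposition. All names here (fake_ask,
# build_prompt, TOO_COMPLEX) are illustrative, not a real API.
from dataclasses import dataclass

@dataclass
class SubProblem:
    question: str
    depth: int = 0          # recursion depth of this sub-problem

MAX_DEPTH = 2               # bound recursive decomposition

def build_prompt(sub, context):
    """Prepend earlier QA pairs so later steps see earlier findings."""
    lines = [f"Q: {q}\nA: {a}" for q, a in context]
    lines.append(f"Now answer: {sub.question}")
    return "\n".join(lines)

def solve(sub, context, ask, decompose):
    """Solve one sub-problem; recurse when the model flags it as multi-faceted."""
    answer = ask(build_prompt(sub, context))
    if answer == "TOO_COMPLEX" and sub.depth < MAX_DEPTH:
        for child in decompose(sub.question):
            solve(SubProblem(child, sub.depth + 1), context, ask, decompose)
    else:
        context.append((sub.question, answer))

def run(questions, ask, decompose):
    context = []            # ordered list of contextual QA pairs
    for q in questions:
        solve(SubProblem(q), context, ask, decompose)
    return context

def fake_ask(prompt):
    """Deterministic stand-in for the model, for demonstration only."""
    target = prompt.splitlines()[-1]          # the current sub-problem
    if "primary issue" in target:
        return "TOO_COMPLEX"                  # ask for a finer decomposition
    if "Account status" in target:
        return "Customer is locked out"
    if "follow-up" in target:
        # The carried QA pair changes the decision: a known lockout
        # suppresses the otherwise-irrelevant billing question.
        return "None needed" if "locked out" in prompt else "Ask for billing details"
    return "OK"

# Carrying ("Account status?", "Customer is locked out") forward flips the
# follow-up decision from "Ask for billing details" to "None needed".
ctx = run(["Account status?", "What follow-up questions should we ask?"],
          fake_ask, lambda q: [])
```

A nested-issue email would exercise the recursion path instead: `run(["Classify the primary issue"], fake_ask, lambda q: ["Is there a fraud indicator?"])` replaces the too-coarse parent sub-problem with its child's QA pair, which is how a complaint containing a fraud indicator avoids misrouting.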

Updated 2026-02-06

Tags

Ch.3 Prompting - Foundations of Large Language Models