Debugging an LLM-based Classification Pipeline
Based on the provided case study, analyze the discrepancy between the model's output and the downstream code's expectations. Explain why urgent_count is never incremented, and propose a specific modification to the pipeline that resolves the issue.
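The case study itself is not reproduced on this card, but the failure mode it describes can be sketched. In this hypothetical reconstruction (the function names, the canned model reply, and the label set "Urgent"/"Normal" are all illustrative assumptions, not taken from the source), the downstream code compares the model's free-form generation against the exact string "Urgent", so the comparison is never true and the counter stays at zero; adding a label-extraction step resolves it:

```python
# A hypothetical reconstruction of the pipeline in the case study;
# function and label names ("Urgent"/"Normal") are illustrative.
def classify_ticket(ticket):
    # Stand-in for the LLM call: generative models often reply with a
    # full sentence instead of the bare label the code expects.
    return 'This ticket sounds pressing, so I would label it "Urgent".'

def count_urgent(tickets):
    urgent_count = 0
    for ticket in tickets:
        output = classify_ticket(ticket)
        if output == "Urgent":  # never true: output is a whole sentence
            urgent_count += 1
    return urgent_count         # always 0 -- the reported bug

def extract_label(raw_output, labels=("Urgent", "Normal")):
    # Proposed fix: a label-extraction step that searches the raw
    # generation for a known label instead of requiring an exact match.
    for candidate in labels:
        if candidate.lower() in raw_output.lower():
            return candidate
    return None

def count_urgent_fixed(tickets):
    return sum(extract_label(classify_ticket(t)) == "Urgent" for t in tickets)
```

A substring scan is only one possible extraction strategy; constraining the model's output format in the prompt, or parsing a structured response, would achieve the same decoupling between generation and counting.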
Tags
Ch.3 Prompting - Foundations of Large Language Models
Foundations of Large Language Models
Computing Sciences
Foundations of Large Language Models Course
Analysis in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
A developer is building a system to classify customer reviews as 'Positive', 'Negative', or 'Neutral'. Instead of using a traditional classification model, they are prompting a large, general-purpose text generation model to perform the task. The model is given the review: 'The battery life on this new phone is incredible!' Which of the following potential model outputs best illustrates why a separate 'label extraction' step is often required in this approach?
Example of an LLM Generating a Descriptive Negative Output for Polarity Classification
Interpreting Text Generation Model Outputs for Classification
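The related question above hinges on the same gap: a generative model asked to classify the battery-life review rarely replies with just the bare label, which is why a separate label-extraction step is needed. A minimal sketch, assuming a verbose model reply (the output string and helper below are illustrative, not from the source):

```python
import re

# Hypothetical verbose generation for the review in the related
# question; a real model's wording would vary.
raw_output = ("Sure! The sentiment here is clearly Positive, since the "
              "reviewer praises the phone's battery life.")

LABELS = ("Positive", "Negative", "Neutral")

def extract_label(text):
    # Label extraction: search the free-form generation for the first
    # allowed label rather than trusting the output format.
    match = re.search(r"\b(" + "|".join(LABELS) + r")\b", text,
                      re.IGNORECASE)
    return match.group(1).capitalize() if match else None

print(extract_label(raw_output))  # → Positive
```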