Learn Before
Improving Prompt Specificity for Automated Data Extraction
A researcher is using a language model to automate the process of verifying whether a medical statement is supported by a given scientific abstract. They use the prompt below:
Abstract: [Abstract text here]
Statement: [Statement text here]
Is the statement supported by the abstract?
The model consistently returns nuanced, paragraph-long answers like, 'The abstract suggests a correlation, but does not definitively prove the statement.' While informative, these responses are difficult to process automatically.
Identify the primary weakness in the prompt that leads to these unhelpful responses. Then, rewrite the prompt to constrain the model's output to a clear, categorical format that would be easier to parse programmatically.
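One way to approach the rewrite is to pin the model to a fixed label set and validate its reply before storing it. The sketch below assumes nothing about a particular API; `build_prompt` and `parse_answer` are illustrative helpers, and the label names are one reasonable choice, not the only one:

```python
# Possible rewrite: constrain the answer to a fixed label set so the
# output can be parsed programmatically. Label names are illustrative.
LABELS = {"SUPPORTED", "NOT_SUPPORTED", "UNCERTAIN"}

PROMPT_TEMPLATE = (
    "Abstract: {abstract}\n"
    "Statement: {statement}\n"
    "Does the abstract support the statement? "
    "Answer with exactly one word: SUPPORTED, NOT_SUPPORTED, or UNCERTAIN. "
    "Do not add any explanation."
)

def build_prompt(abstract: str, statement: str) -> str:
    return PROMPT_TEMPLATE.format(abstract=abstract, statement=statement)

def parse_answer(raw: str) -> str:
    """Normalize the model's reply to one of the three labels,
    tolerating stray whitespace or a trailing period."""
    token = raw.strip().upper().rstrip(".")
    if token in LABELS:
        return token
    raise ValueError(f"Unparseable model output: {raw!r}")
```

The explicit label vocabulary plus the "exactly one word" instruction is what addresses the weakness in the original prompt: the downstream parser can accept or reject the reply mechanically instead of interpreting a nuanced paragraph.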
Tags
Ch.2 Generative Models - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Analysis in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
Example of Defining Answer Semantics for Grammaticality Judgment
Example of Defining Category Semantics in a Polarity Classification Prompt
A data scientist is using a language model to classify customer feedback into 'Bug Report' or 'Feature Request'. Their initial prompt is:
Feedback: 'The app crashes when I try to upload a photo.' What kind of feedback is this?
They observe that the model's outputs are inconsistent, including responses like 'This is a bug report,' 'It seems like a bug,' and 'The user is reporting a problem with the app.' Which of the following revised prompts best addresses this inconsistency by explicitly defining the required output format and the meaning of the categories?
Refining a Prompt for Feature Request Identification
Example of a Constraint-First Prompt for Grammaticality Judgment
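The classification inconsistency described in the related customer-feedback example can be handled the same way as the abstract-verification task: define each category and fix the output format in the prompt itself. A minimal sketch, with illustrative category names and definitions:

```python
# Sketch of a revised classification prompt that defines each category and
# pins down the output format. Definitions here are illustrative assumptions.
CLASSIFY_TEMPLATE = (
    "Classify the feedback into exactly one category.\n"
    "Categories:\n"
    "- BUG_REPORT: the user describes something broken or not working as intended.\n"
    "- FEATURE_REQUEST: the user asks for new or changed functionality.\n"
    "Respond with only the category name, nothing else.\n\n"
    "Feedback: {feedback}"
)

def classify_prompt(feedback: str) -> str:
    # Fill the template with the feedback text to classify.
    return CLASSIFY_TEMPLATE.format(feedback=feedback)
```

Defining the categories removes ambiguity about what counts as a bug versus a feature, and the "only the category name" constraint makes replies like 'It seems like a bug' off-format by construction.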