Predicting Prompting Pitfalls
You are designing a prompt for a large language model to solve a multi-step logic puzzle. To encourage the model to show its work, you append the phrase "Let's think step by step" to your query. Describe a common, undesirable outcome related to the completeness of the model's response when using this technique, and explain why it would be unhelpful to a user who needs a final, clear solution.
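The failure mode the question points at can be sketched in a few lines. Everything here is illustrative: `has_explicit_answer` is a hypothetical heuristic, not a real library function, and the sample output mimics a chain-of-thought response that reasons correctly but never states a conclusion.

```python
import re

def build_cot_prompt(question: str) -> str:
    """Append the zero-shot chain-of-thought trigger to a question."""
    return f"{question} Let's think step by step."

def has_explicit_answer(output: str) -> bool:
    """Heuristic check (illustrative only): does the output state a
    final answer, e.g. 'Therefore, the answer is 240'?"""
    return bool(re.search(r"(the answer is|therefore)[^.]*\d+",
                          output, re.IGNORECASE))

# A typical incomplete CoT output: all intermediate steps, no conclusion.
reasoning_only = (
    "First, 8 rows * 10 chairs/row = 80 chairs per room. "
    "Then, 3 rooms * 80 chairs/room = 240 chairs."
)
print(has_explicit_answer(reasoning_only))  # False
```

A user who needs the answer would have to read the whole chain and infer the conclusion themselves, which is exactly the inconvenience the question asks you to describe.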
Tags
Ch.3 Prompting - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Application in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
Answer Extraction via Second-Round Prompting
A user gives a large language model the following prompt:
"Prompt: At a conference, there are 3 presentation rooms. Each room has 8 rows of chairs, and each row has 10 chairs. If all chairs are filled, how many people are attending the presentations? Let's think step by step."
The model provides the following output:
"Output: Okay, let's break this down. First, we need to find the total number of chairs in one room. There are 8 rows with 10 chairs each, so 8 rows * 10 chairs/row = 80 chairs per room. Then, we need to find the total number of chairs across all rooms. There are 3 rooms, so 3 rooms * 80 chairs/room = 240 chairs."
Analyze the model's output. What is the primary issue with its response?
Evaluating LLM Response Completeness