Learn Before
Improving LLM Reasoning with Step-by-Step Demonstrations
A Large Language Model's ability to reason can be enhanced by providing it with a detailed, step-by-step reasoning process for a comparable problem. This demonstration enables the model to learn how to construct its own logical problem-solving path, increasing its chances of reaching the correct solution for new tasks.
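This pattern of prepending a worked, step-by-step solution to a new problem can be sketched as follows. The demonstration problem, its worked solution, and the function name `build_cot_prompt` are illustrative choices, not taken from any specific source:

```python
# A minimal sketch of one-shot, step-by-step demonstration prompting.
# The demonstration shows the model *how* to reason, so it can imitate
# the same pattern on a new problem (in-context learning).

DEMONSTRATION = """\
Q: A shop sells pens at $2 each. Anna buys 3 pens and pays with a $10 bill.
How much change does she receive?
A: Step 1: The pens cost 3 * $2 = $6.
Step 2: The change is $10 - $6 = $4.
The answer is $4."""


def build_cot_prompt(new_problem: str) -> str:
    """Prepend a worked demonstration to a new problem, ending with 'A:'
    so the model continues by producing its own reasoning steps."""
    return f"{DEMONSTRATION}\n\nQ: {new_problem}\nA:"


prompt = build_cot_prompt(
    "A train travels 60 km in the first hour and 90 km in the second hour. "
    "What is its average speed over the two hours?"
)
print(prompt)
```

The demonstration does the teaching here: because the model sees a complete reasoning path before the new question, it is more likely to produce its own intermediate steps rather than guessing a final answer directly.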
Tags
Ch.3 Prompting - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Related
Effect of 'Thinking' Prompts on LLM Performance
Chain-of-Thought (CoT) Prompting
Multi-Round Interaction to Guide LLM Reasoning
Example of a Prompt for a Direct Mathematical Calculation
Example of a Prompt for Calculating the Average of 1, 3, 5, and 7
Example of a Prompt for Calculating the Mean Square
In-Context Learning (ICL)
A user is trying to get a Large Language Model (LLM) to solve a complex word problem that involves multiple calculations. Their initial prompt, 'What is the answer to this problem? [Problem text]', results in a quick but incorrect numerical answer. The user then revises the prompt to: 'First, break down the problem into the necessary steps. Then, solve each step, showing your work. Finally, state the final answer. [Problem text]'. This revised prompt leads to a correct solution. Which principle of interacting with LLMs does this scenario best illustrate?
Evaluating Prompt Strategies for a Logic Puzzle
Prompting for a Reasoning Process to Mitigate Errors in Complex Tasks
Example of a Prompt for Calculating the Average of 2, 4, and 9
Improving LLM Problem-Solving by Demonstrating Reasoning Steps
The Mechanism of Reasoning Prompts
Example of a Prompt with Detailed Reasoning Steps
Learn After
An engineer is using a large language model to solve multi-step physics problems. The model consistently makes logical errors and provides incorrect final answers. The engineer's goal is to improve the model's ability to reason through the problems correctly. Which of the following prompt strategies would be the most effective way to achieve this?
Improving LLM Performance on a Complex Summarization Task
Critique of a Reasoning Demonstration Prompt