Analyzing a Tool Execution Failure
A user asks a large language model, 'What is the current weather in Paris?' The model's internal process generates a command to call a weather tool, but the tool returns the following text: API_LIMIT_EXCEEDED. The model must now generate a final response to the user. Based on your understanding of how tool results are integrated, what kind of response should the model logically produce, and why?
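The mechanism the question tests can be sketched in a few lines: the tool's output, whether a weather report or an error string, is appended verbatim to the model's context before the final answer is generated. The tool and context format below are hypothetical stand-ins for illustration, not a real API.

```python
# Minimal sketch of tool-result integration, assuming a simple
# append-to-context protocol. All names here are illustrative.

def call_weather_tool(city: str) -> str:
    # Stand-in for a real weather API; simulates a rate-limit failure.
    return "API_LIMIT_EXCEEDED"

def build_final_context(question: str) -> str:
    context = question + "\n"
    context += '{tool: weather, query: "Paris"}\n'  # the model's tool call
    result = call_weather_tool("Paris")
    context += result + "\n"  # tool output is appended verbatim, even on error
    return context

context = build_final_context("What is the current weather in Paris?")
```

Because the error string now sits in the context in place of usable weather data, a well-behaved model should report that it could not retrieve the weather rather than invent a forecast.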
Tags
Ch.3 Prompting - Foundations of Large Language Models
Related
Example of a Final LLM-Generated Answer Using a Tool
A language model is tasked with answering a question that requires external information. Below are four distinct internal actions that occur. Arrange these actions in the correct chronological sequence from first to last.
A large language model is asked: 'What is the capital of Australia and what is its population?' The model begins generating its response and produces the following internal state:

The capital of Australia is Canberra. To find the population, I will search online. {tool: web-search, query: "population of Canberra"}

The web search tool then executes and returns the text:

The population of Canberra in 2023 was 467,194.

What will the model's internal context be immediately after this result is integrated, right before it generates the final part of its answer?
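The integration step this question asks about amounts to splicing the tool's returned text into the model's running context. A minimal sketch, assuming simple string concatenation of generated text and tool output (the format is illustrative):

```python
# Sketch of the model's context immediately after the web-search result
# is integrated; tool-call syntax and concatenation are illustrative.
generated_so_far = ('The capital of Australia is Canberra. To find the population, '
                    'I will search online. '
                    '{tool: web-search, query: "population of Canberra"}')
tool_result = 'The population of Canberra in 2023 was 467,194.'

# The result is appended to the running context, so the next tokens the
# model generates can condition on the retrieved population figure.
context = generated_so_far + tool_result
```

At this point the context contains both the model's partial answer and the retrieved figure, which is exactly what the final generation step conditions on.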