Learn Before
Executing Tool Calls and Integrating Results in LLMs
When a large language model generates a string formatted as a tool call, the surrounding system intercepts it and executes the designated tool, such as a web search, with the provided query. The tool's output then replaces the original tool-call string in the model's context. The LLM uses this newly inserted information in subsequent prediction steps to formulate a correct, contextually aware answer.
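This detect–execute–replace loop can be sketched in a few lines. The sketch below is a minimal illustration, not a real framework: the `{tool: ..., query: "..."}` syntax mirrors the example on this page, and the tool registry and `web_search` stub are hypothetical stand-ins for a real backend.

```python
import re

def web_search(query: str) -> str:
    # Hypothetical stand-in for a real search backend.
    return f"Top result for '{query}'."

# Registry mapping tool names to callables.
TOOLS = {"web-search": web_search}

# Pattern matching the tool-call syntax used in this page's example.
TOOL_CALL = re.compile(r'\{tool:\s*(?P<name>[\w-]+),\s*query:\s*"(?P<query>[^"]*)"\}')

def integrate_tool_results(context: str) -> str:
    """Replace each tool-call string in the context with the tool's output."""
    def run(match: re.Match) -> str:
        tool = TOOLS[match.group("name")]
        return tool(match.group("query"))
    return TOOL_CALL.sub(run, context)

context = ('The capital of Australia is Canberra. To find the population, '
           'I will search online. '
           '{tool: web-search, query: "population of Canberra"}')
print(integrate_tool_results(context))
```

After the substitution, the context contains the search result in place of the tool-call string, and generation resumes from that updated context.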
Tags
Ch.3 Prompting - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Related
Executing Tool Calls and Integrating Results in LLMs
A user asks a large language model, 'What is the current stock price of ExampleCorp (ticker: EXM)?'. The model has access to a tool designed to fetch stock prices which requires a company's ticker symbol. Which of the following represents the most appropriate and correctly structured command the model would generate to use this tool?
Evaluating LLM Tool Use Efficiency
Constructing a Tool Call for a Calendar API
Learn After
Example of a Final LLM-Generated Answer Using a Tool
A language model is tasked with answering a question that requires external information. Below are four distinct internal actions that occur. Arrange these actions in the correct chronological sequence from first to last.
A large language model is asked: 'What is the capital of Australia and what is its population?' The model begins generating its response and produces the following internal state:
The capital of Australia is Canberra. To find the population, I will search online. {tool: web-search, query: "population of Canberra"}

The web search tool then executes and returns the text:

The population of Canberra in 2023 was 467,194.

What will the model's internal context be immediately after this result is integrated, right before it generates the final part of its answer?
Analyzing a Tool Execution Failure