Example of a Final LLM-Generated Answer Using a Tool
After an external tool, such as a web search, returns the required information, the large language model processes that data to formulate a final answer. The output can range from a terse statement, like 'So the answer is: Los Angeles', to a more complete, user-friendly sentence, such as 'The 2028 Olympics will be held in Los Angeles.'
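The loop described above can be sketched in a few lines of Python. This is a hedged illustration, not a real API: the `{tool: ..., query: "..."}` marker format, the `web_search` stub, and the function names are all assumptions made for the example.

```python
import re

def web_search(query):
    # Stub standing in for a real search backend (hypothetical).
    return "The 2028 Olympics will be held in Los Angeles."

TOOLS = {"web-search": web_search}

# Matches markers like: {tool: web-search, query: "2028 Olympics host city"}
CALL = re.compile(r'\{tool:\s*([\w-]+),\s*query:\s*"([^"]+)"\}')

def run_with_tools(model_output):
    """Replace each tool-call marker with the tool's result, yielding
    the context from which the model resumes generation."""
    def execute(match):
        tool, query = match.group(1), match.group(2)
        return TOOLS[tool](query)
    return CALL.sub(execute, model_output)

draft = 'To answer, I will search online. {tool: web-search, query: "2028 Olympics host city"}'
context = run_with_tools(draft)
# The model would then continue from `context` to produce the final
# user-facing sentence shown above.
```

The key design point is that the tool's raw result is spliced back into the model's own text, so the final answer is generated with the retrieved fact already in context.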
Tags
Ch.3 Prompting - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Related
A language model is tasked with answering a question that requires external information. Below are four distinct internal actions that occur. Arrange these actions in the correct chronological sequence from first to last.
A large language model is asked: 'What is the capital of Australia and what is its population?' The model begins generating its response and produces the following internal state:
The capital of Australia is Canberra. To find the population, I will search online. {tool: web-search, query: "population of Canberra"}

The web search tool then executes and returns the text:

The population of Canberra in 2023 was 467,194.

What will the model's internal context be immediately after this result is integrated, right before it generates the final part of its answer?
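The integration step in the scenario above can be sketched as a simple context concatenation. This is a hedged illustration of one plausible design: the variable names are hypothetical, and real systems may keep the call marker or format the result differently.

```python
# Hypothetical sketch: splicing a tool result into the model's context
# before generation resumes (marker handling is an assumption).
before_call = ("The capital of Australia is Canberra. "
               "To find the population, I will search online. ")
tool_result = "The population of Canberra in 2023 was 467,194."

# The returned text is appended where the call marker was, so the resumed
# context holds both the model's own words and the retrieved fact.
resumed_context = before_call + tool_result
print(resumed_context)
```

From this resumed context, the model can generate the final clause of its answer without issuing another tool call.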
Analyzing a Tool Execution Failure
A user asks a language model: 'Who won the Best Picture award in 2023 and what was the movie about?' The model uses an external tool to retrieve information, and receives the following data: '[Search Result]: The 2023 Academy Award for Best Picture was won by "Everything Everywhere All at Once". The film is a sci-fi action-adventure about an exhausted Chinese-American woman who discovers she must connect with parallel universe versions of herself to prevent a powerful being from destroying the multiverse.' Based on this retrieved data, which of the following is the most appropriate final response for the model to generate?
Synthesizing Information for a Final Answer
Evaluating LLM Response Styles from Tool Output