Learn Before
Learning During Inference
Learning during inference describes the ability of a model, such as an LLM, to learn to perform a new task at prediction time, guided solely by the information presented in the input and without any update to the model's weights. In-context learning is a key method that exemplifies this: the model learns from demonstrations provided directly within the prompt.
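A minimal sketch of how such a prompt is assembled in practice: the "learning" lives entirely in the text of the prompt, and no weights change. The sentiment task, the helper name `build_few_shot_prompt`, and the example reviews below are illustrative assumptions, not part of any specific API.

```python
def build_few_shot_prompt(examples, query):
    """Assemble demonstrations and a new query into a single prompt.

    The model sees the task instruction, a few worked examples, and the
    new input; any adaptation happens at inference time from this text.
    """
    lines = ["Classify the sentiment of each review as Positive or Negative.", ""]
    for text, label in examples:
        lines.append(f"Review: {text}")
        lines.append(f"Sentiment: {label}")
        lines.append("")
    lines.append(f"Review: {query}")
    lines.append("Sentiment:")
    return "\n".join(lines)

examples = [
    ("A delightful, moving film.", "Positive"),
    ("Two hours I will never get back.", "Negative"),
]
prompt = build_few_shot_prompt(examples, "The plot was clever and the acting superb.")
print(prompt)
```

The resulting string would be sent as-is to a language model; the same pre-trained model can be steered to a different task simply by swapping the demonstrations.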
Tags
Ch.3 Prompting - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Related
Learning During Inference
A technology firm aims to develop three separate AI tools: one to summarize scientific research papers, another to generate creative story plots, and a third to function as a conversational chatbot for customer support. Which development approach best leverages the fundamental capabilities of a modern, large-scale, pre-trained language model, and why?
AI Development Strategy Analysis
The Source of LLM Versatility
Learn After
A developer provides a large language model with the following input text:
`Translate the following user requests into a structured command format.
Example 1: Request: "Set a timer for 10 minutes" Command: {"action": "set_timer", "duration_minutes": 10}
Example 2: Request: "What's the weather in London?" Command: {"action": "get_weather", "location": "London"}
Now, process this request: Request: "Play the new song by The Weeknd"`
The model correctly outputs:
`{"action": "play_music", "artist": "The Weeknd"}`
Which statement best analyzes the primary mechanism the model used to generate the correct command for the new request?
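Once the model emits a structured command like the one above, the application still has to parse and act on it. A hedged sketch of that downstream step, assuming the hypothetical handler names below (they are not part of any real API):

```python
import json

def dispatch(command_json):
    """Parse a model-emitted command string and route it to a handler."""
    cmd = json.loads(command_json)
    # Illustrative handlers matching the actions in the few-shot examples.
    handlers = {
        "set_timer": lambda c: f"Timer set for {c['duration_minutes']} minutes",
        "get_weather": lambda c: f"Fetching weather for {c['location']}",
        "play_music": lambda c: f"Playing music by {c['artist']}",
    }
    action = cmd.get("action")
    if action not in handlers:
        raise ValueError(f"Unknown action: {action}")
    return handlers[action](cmd)

print(dispatch('{"action": "play_music", "artist": "The Weeknd"}'))
```

Note that the model never learned these handlers; it only inferred the output *format* from the two in-prompt demonstrations, which is exactly the mechanism the question probes.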
In-Context Learning (ICL)
True or False: When a large language model successfully performs a novel task after being shown examples within a single prompt, this indicates that the model has undergone a permanent update to its internal weights, effectively 'training' it on the new task for all future interactions.
Analyzing Model Behavior Across Sessions