Learn Before
When a large language model successfully solves a new problem after being shown several examples within a single prompt, it is because the model's underlying weights have been permanently updated to incorporate the new problem-solving pattern.
0
1
Tags
Ch.1 Pre-training - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Ch.3 Prompting - Foundations of Large Language Models
Ch.2 Generative Models - Foundations of Large Language Models
Analysis in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
Example of a Demonstration for In-Context Learning
Calculation Annotation in LLM Prompts
Example of a Demonstration for Sentiment Classification (Positive)
Example of a Demonstration for Sentiment Classification (Negative)
An AI developer provides a large language model with the following prompt:

"First, here are two examples of converting a sentence into a question.
Example 1 Input: 'The cat is on the mat.'
Example 1 Output: 'Is the cat on the mat?'
Example 2 Input: 'They are running a race.'
Example 2 Output: 'Are they running a race?'
Now, using this pattern, convert the following sentence into a question: 'She is writing a book.'"

The model successfully outputs: "Is she writing a book?"

Which statement best analyzes the underlying mechanism that allowed the model to succeed?
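The prompt in the question can be assembled programmatically. Below is a minimal Python sketch of building such a few-shot (in-context learning) prompt from demonstration pairs; the function name `build_few_shot_prompt` is illustrative, and no model is actually called — the point is only that all the "learning" material lives in the prompt text itself, not in any weight update:

```python
def build_few_shot_prompt(demonstrations, query):
    """Concatenate input/output demonstrations, then append the new query.

    The model receives this entire string as context at inference time;
    its weights are never modified by the demonstrations.
    """
    lines = ["First, here are examples of converting a sentence into a question."]
    for i, (inp, out) in enumerate(demonstrations, start=1):
        lines.append(f"Example {i} Input: '{inp}'")
        lines.append(f"Example {i} Output: '{out}'")
    lines.append(
        "Now, using this pattern, convert the following sentence "
        f"into a question: '{query}'"
    )
    return "\n".join(lines)


# Demonstrations taken from the question above.
demos = [
    ("The cat is on the mat.", "Is the cat on the mat?"),
    ("They are running a race.", "Are they running a race?"),
]
prompt = build_few_shot_prompt(demos, "She is writing a book.")
print(prompt)
```

This string would then be sent as a single input to the model, which conditions on the demonstrations to infer the pattern.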
Improving LLM Output Consistency