Learn Before
A researcher provides a large language model with several examples of a novel, made-up mathematical operation within a single prompt. The model then correctly applies this operation to new numbers. This success demonstrates in-context learning: the model infers the rule from the examples in the prompt alone, with no permanent update to its internal parameters.
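The scenario above describes few-shot (in-context) prompting. A minimal sketch of how such a prompt might be assembled, assuming a hypothetical made-up operation (here, `a # b = a + 2*b`) and illustrative helper names not taken from the original item:

```python
# Sketch: building a few-shot prompt for a novel operation.
# The operation and function names below are illustrative assumptions.

def made_up_op(a: int, b: int) -> int:
    """The novel rule the in-context examples demonstrate."""
    return a + 2 * b

def build_few_shot_prompt(examples, query):
    """Format worked examples followed by an unanswered query.

    No model weights are touched here: the 'learning' lives entirely
    in the prompt text the model conditions on at inference time.
    """
    lines = [f"{a} # {b} = {made_up_op(a, b)}" for a, b in examples]
    lines.append(f"{query[0]} # {query[1]} = ")
    return "\n".join(lines)

prompt = build_few_shot_prompt([(1, 2), (3, 1), (4, 5)], (2, 3))
print(prompt)
```

A model that completes this prompt with `8` has matched the pattern from context; inspecting its parameters before and after would show no change.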
Tags
Ch.3 Prompting - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Analysis in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
A large language model was pre-trained on a vast dataset including texts in both Japanese and English. Without any changes to its internal parameters, it is given a prompt containing a few examples of English-to-Japanese translation (e.g., 'water -> 水', 'fire -> 火'). The model then correctly translates a new, unseen word: 'mountain -> 山'. Which of the following statements provides the most accurate explanation for this phenomenon?
Explaining Differential Performance in Language Models