Learn Before
An NLP model is trained on a dataset of commands. The training data includes 'walk left', 'walk right', 'run left', 'run right', 'jump twice', and 'jump three times'. The model performs perfectly on these commands. However, when tested on the new, unseen command 'jump left', the model fails. What does this failure most likely indicate about the model?
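The failure pattern in the question can be illustrated with a minimal sketch (names and action tokens are hypothetical, loosely styled after SCAN-like command datasets): a model that merely memorizes command-to-action pairs scores perfectly on its training set yet cannot handle a novel combination, whereas a model that composes meaning from the parts handles 'jump left' with no extra training data.

```python
# Hypothetical sketch: memorization vs. compositional generalization.
# Action tokens (WALK, RUN, JUMP, LTURN, RTURN) are illustrative assumptions.

TRAIN = {
    "walk left":        ["LTURN", "WALK"],
    "walk right":       ["RTURN", "WALK"],
    "run left":         ["LTURN", "RUN"],
    "run right":        ["RTURN", "RUN"],
    "jump twice":       ["JUMP", "JUMP"],
    "jump three times": ["JUMP", "JUMP", "JUMP"],
}

def memorizer(command):
    """Pure lookup: perfect on every training command, but returns
    None for any unseen combination such as 'jump left'."""
    return TRAIN.get(command)

# A compositional alternative: meanings for the primitives and modifiers
# are learned separately, then combined by rule.
PRIMITIVES = {"walk": "WALK", "run": "RUN", "jump": "JUMP"}
DIRECTIONS = {"left": "LTURN", "right": "RTURN"}

def compositional(command):
    """Handles only 'verb direction' commands, for brevity: it composes
    the two parts, so 'jump left' works despite never being seen."""
    verb, direction = command.split()
    return [DIRECTIONS[direction], PRIMITIVES[verb]]

print(memorizer("walk left"))        # found in the training set
print(memorizer("jump left"))        # None: memorization fails on novelty
print(compositional("jump left"))    # composed from known parts
```

The memorizer's perfect training accuracy masks the problem entirely; only the held-out combination exposes that it never learned 'jump' and 'left' as reusable, recombinable pieces.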
Tags
Ch.3 Prompting - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Analysis in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
Compositional Reasoning Tasks for LLMs
Diagnosing a Model's Language Limitation
The Challenge of Novelty in Language Models