Explaining an LLM's Reasoning Error
A user provides a language model with the following prompt: 'There are three friends standing in a line: Alice, Bob, and Charlie. Alice is behind Bob. Charlie is in front of Bob.' The user then asks, 'Who is in the middle?' The model incorrectly responds, 'Alice.' Explain the most likely underlying reason for this type of error, given that the model's knowledge is based solely on its training data.
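To see the failure firsthand, the prompt can be sent to a model directly. Below is a minimal sketch, assuming the OpenAI Python SDK with an API key in the environment; the model name is illustrative, and any chat-completion API would work the same way.

```python
# Minimal sketch for probing the spatial-reasoning failure described above.
# Assumptions: the OpenAI Python SDK is installed and OPENAI_API_KEY is set;
# the model name is an illustrative choice, not one prescribed by this card.
from openai import OpenAI

client = OpenAI()

prompt = (
    "There are three friends standing in a line: Alice, Bob, and Charlie. "
    "Alice is behind Bob. Charlie is in front of Bob. Who is in the middle?"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[{"role": "user", "content": prompt}],
    temperature=0,  # near-deterministic decoding, to make the error reproducible
)

# The correct answer is "Bob" (front-to-back order: Charlie, Bob, Alice);
# the card describes a model that instead answers "Alice".
print(response.choices[0].message.content)
```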
Tags
Ch.2 Generative Models - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Ch.3 Prompting - Foundations of Large Language Models
Analysis in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
Challenging Reasoning Tasks for LLMs
Self-Refinement in LLMs
Model Ensembling for Text Generation
Output Ensembling
Retrieval-Augmented Generation (RAG)
LLM Tool Use with External APIs
Evolution of the Concept of Alignment in NLP
Analyze the two scenarios below, each showing an incorrect output from a language model. Which scenario provides the clearest example of a failure caused by the model's lack of implicit knowledge, rather than a simple factual error in its training data?
Analyzing an LLM's Reasoning Failure
Limitations of Pre-trained Knowledge in Standard LLMs