Multiple Choice

A large language model is tasked with solving a complex multi-step logic puzzle. It is prompted to generate its reasoning one step at a time in a linear sequence. The model consistently fails to find the correct solution. Analysis of its outputs reveals that it often makes an early, plausible-but-incorrect assumption. Even when later steps in its reasoning lead to a clear contradiction, the model does not go back to revise its initial assumption and continues to build upon the flawed foundation. What is the most likely reason for this type of failure?
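The failure the question describes can be made concrete with a toy constraint puzzle. This is a hypothetical illustration, not from the source: the puzzle, variable names, and both solver functions are invented for the sketch. A purely forward, step-by-step solver mimics linear chain-of-thought generation, committing to the first locally plausible choice and never revisiting it, while a backtracking solver can revise an early assumption once a later contradiction appears.

```python
# Hypothetical toy puzzle: pick x, y, z from {1, 2} such that
# x != y, y != z, and x + z == 4. The first plausible choice (x = 1)
# is wrong, so a solver that cannot revise earlier steps fails.
DOMAIN = [1, 2]

def consistent(assign):
    """Check the partial assignment against all constraints that apply."""
    x, y, z = assign.get("x"), assign.get("y"), assign.get("z")
    if x is not None and y is not None and x == y:
        return False
    if y is not None and z is not None and y == z:
        return False
    if x is not None and z is not None and x + z != 4:
        return False
    return True

def forward_only():
    """Linear 'chain of thought': take the first locally consistent value
    for each variable in order and never revisit an earlier choice."""
    assign = {}
    for var in ("x", "y", "z"):
        for val in DOMAIN:
            assign[var] = val
            if consistent(assign):
                break  # commit to the first plausible step
        else:
            return None  # contradiction reached, but earlier steps stay frozen
    return assign

def backtracking(assign=None, order=("x", "y", "z")):
    """Search that undoes an earlier assumption when a later step
    produces a contradiction."""
    assign = assign or {}
    if len(assign) == len(order):
        return dict(assign)
    var = order[len(assign)]
    for val in DOMAIN:
        assign[var] = val
        if consistent(assign):
            result = backtracking(assign, order)
            if result is not None:
                return result
        del assign[var]  # undo this choice and try another value
    return None

print(forward_only())   # None: the early choice x=1 leads to a dead end
print(backtracking())   # {'x': 2, 'y': 1, 'z': 2}
```

The `forward_only` solver fails exactly the way the question describes: its first step (x = 1) is plausible in isolation, the contradiction only surfaces at z, and by then the flawed assumption can no longer be revised.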


Updated 2025-09-26


Tags

Ch.5 Inference - Foundations of Large Language Models

Foundations of Large Language Models

Foundations of Large Language Models Course

Computing Sciences

Analysis in Bloom's Taxonomy

Cognitive Psychology

Psychology

Social Science

Empirical Science

Science