Learn Before
Concept

Self-Reflection in LLMs

Self-reflection in Large Language Models is a concept analogous to human introspection, in which the model evaluates its own outputs. It is believed that if LLMs can self-reflect, they can catch and correct their own mistakes, developing self-correction capabilities that improve the accuracy of their predictions.
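One common way to operationalize this idea is a generate-critique-revise loop: the model produces an answer, critiques it, and revises until the critique raises no issues. The sketch below is a minimal illustration of that loop; `llm` is a hypothetical stand-in for any text-in, text-out model call, and the prompt wording and the "no issues" stopping signal are assumptions for demonstration, not a prescribed protocol.

```python
def self_reflect(llm, question, max_rounds=2):
    """Generate an answer, then iteratively critique and revise it."""
    answer = llm(f"Question: {question}\nAnswer:")
    for _ in range(max_rounds):
        critique = llm(f"Critique this answer to '{question}': {answer}")
        if "no issues" in critique.lower():
            break  # the model judges its own answer acceptable
        answer = llm(
            f"Question: {question}\nPrevious answer: {answer}\n"
            f"Critique: {critique}\nRevised answer:"
        )
    return answer


# Toy demonstration with a scripted stand-in for a real model,
# showing a wrong first answer being caught and corrected.
script = iter([
    "Paris is the capital of Germany.",     # initial answer (wrong)
    "The answer names the wrong country.",  # critique, round 1
    "Paris is the capital of France.",      # revised answer
    "No issues.",                           # critique, round 2
])
toy_llm = lambda prompt: next(script)
print(self_reflect(toy_llm, "What is the capital of France?"))
# prints "Paris is the capital of France."
```

With a real model, the same loop applies unchanged; only the `llm` callable differs. Note that in practice the critique step can itself be wrong, so the loop improves accuracy only to the extent that the model's self-evaluations are reliable.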


Updated 2026-04-30


Tags

Ch.3 Prompting - Foundations of Large Language Models


Foundations of Large Language Models Course

Computing Sciences
