Concept

Using Pre-trained Knowledge when Retrieved Context is Insufficient

In RAG systems, when retrieved texts are insufficient to form a complete answer, an alternative strategy is to allow the LLM to generate a response using its own pre-trained knowledge. This approach contrasts with strictly limiting the model to the provided context or having it refuse to answer.
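One way to realize this fallback strategy is in the prompt itself: instruct the model to prefer the retrieved passages, but to answer from its own pre-trained knowledge (and say so) when the passages are insufficient. A minimal sketch, where the function name and exact wording are illustrative assumptions, not taken from the source:

```python
def build_rag_prompt(question: str, passages: list[str]) -> str:
    """Build a RAG prompt that permits fallback to the model's own knowledge."""
    # Number the retrieved passages so the model can cite them.
    context = "\n\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return (
        "Answer the question using the retrieved passages below. "
        "If the passages are insufficient to form a complete answer, "
        "answer from your own knowledge and state that you did so.\n\n"
        f"Passages:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

# Example: one weakly relevant passage, so the instruction matters.
prompt = build_rag_prompt(
    "Who proposed the Transformer architecture?",
    ["Transformers use self-attention instead of recurrence."],
)
print(prompt)
```

This contrasts with the stricter alternatives mentioned above, where the instruction would instead tell the model to answer *only* from the passages or to refuse when they do not contain the answer.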

Updated 2025-10-06

Tags

Ch.3 Prompting - Foundations of Large Language Models

Foundations of Large Language Models

Foundations of Large Language Models Course

Computing Sciences