Using Pre-trained Knowledge when Retrieved Context is Insufficient
In RAG systems, when the retrieved texts are insufficient to form a complete answer, an alternative strategy is to let the LLM generate a response from its own pre-trained knowledge. This contrasts with two stricter policies: restricting the model to the provided context, or having it refuse to answer. The fallback improves coverage at the cost of grounding, so prompts that permit it commonly ask the model to state when an answer is not supported by the retrieved documents.
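To make the contrast concrete, here is a minimal sketch of how the two policies might differ at the prompt level. The template wording, the function name build_prompt, and the allow_fallback flag are illustrative assumptions, not part of the course material:

```python
# A minimal sketch (assumed names, not from the course material) contrasting
# two grounding policies for a RAG prompt: strict context-only answering
# versus a fallback to the model's pre-trained knowledge.

STRICT_TEMPLATE = (
    "Answer the question using ONLY the context below. If the context "
    "does not contain the answer, reply \"I don't know.\"\n\n"
    "Context:\n{context}\n\n"
    "Question: {question}\n"
    "Answer:"
)

FALLBACK_TEMPLATE = (
    "Answer the question using the context below. If the context is "
    "insufficient, you may answer from your own pre-trained knowledge, "
    "but state clearly that the answer is not supported by the provided "
    "documents.\n\n"
    "Context:\n{context}\n\n"
    "Question: {question}\n"
    "Answer:"
)

def build_prompt(question: str, retrieved_chunks: list[str],
                 allow_fallback: bool) -> str:
    """Assemble a RAG prompt under the chosen grounding policy."""
    # Join retrieved passages; fall back to a placeholder when retrieval
    # returned nothing, so the instruction still reads coherently.
    context = "\n\n".join(retrieved_chunks) or "(no relevant passages retrieved)"
    template = FALLBACK_TEMPLATE if allow_fallback else STRICT_TEMPLATE
    return template.format(context=context, question=question)

# Example: no relevant passages were retrieved, but fallback is permitted,
# so the prompt licenses the model to answer from memory with a disclosure.
print(build_prompt("Who chaired last week's summit?", [], allow_fallback=True))
```

With an empty retrieval result and allow_fallback=True, the assembled prompt explicitly permits the model to answer from memory while disclosing the lack of document support; with allow_fallback=False, the same situation yields an instruction to refuse.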