Restricting LLM Answers to Provided Text
When the provided context is highly reliable, a prompt can explicitly instruct a Large Language Model to generate answers using only the given text. This technique helps ensure that the model's output is grounded entirely in the trustworthy source material rather than in its general pre-trained knowledge.
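One common way to apply this technique is a prompt template that wraps the retrieved context with an explicit restriction and a fallback response for unanswerable questions. The sketch below is illustrative: the function name, instruction wording, and the `"I don't know."` fallback phrase are assumptions, not a fixed standard.

```python
def build_restricted_prompt(context: str, question: str) -> str:
    """Assemble a prompt that tells the model to answer only from
    the supplied context. The exact wording is one of many
    reasonable phrasings of this restriction."""
    return (
        "Answer the question using ONLY the context below. "
        "If the context does not contain the answer, reply exactly: "
        "I don't know.\n\n"
        f"Context: {context}\n\n"
        f"Question: {question}\n"
        "Answer:"
    )

# Example usage with the Aeridor scenario from the related questions:
prompt = build_restricted_prompt(
    "The fictional city of Aeridor was founded in the year 982 by Queen Elara.",
    "When was Aeridor founded?",
)
print(prompt)
```

The explicit fallback instruction matters: without it, a model that cannot find the answer in the context is more likely to fall back on its pre-trained knowledge and hallucinate.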
Tags
Ch.3 Prompting - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Related
Prompt Example for Synthesizing Answers from Provided Context
Restricting LLM Answers to Provided Text
Prompting LLMs with Retrieved Texts in RAG
A developer provides a language model with a specific piece of information:
Context: 'The fictional city of Aeridor was founded in the year 982 by Queen Elara.'
The developer then asks the model:
Question: 'When was Aeridor founded?'
The model, however, responds with:
Answer: 'Aeridor was founded in the 12th century.'
Which of the following statements best analyzes the most likely reason for the model's incorrect response, despite being given the correct information?
Internal Knowledge Base Chatbot Design
A developer is building a customer support chatbot that must answer questions using only the information from the company's official 'Return Policy' document to avoid providing inaccurate or outdated advice. Which of the following prompt strategies is the most effective for constraining the model's output to the provided text?
Using Pre-trained Knowledge when Retrieved Context is Insufficient
Restricting LLM Answers to Provided Text
A team is developing a question-answering system for a company's internal, highly accurate technical manuals. The system's highest priority is to ensure that all answers are strictly based on the information found within these manuals and to avoid generating any information from its general knowledge. Given a user's question and a relevant passage retrieved from the manuals, which of the following instructions to the language model would be most effective at achieving this goal?
Diagnosing and Correcting RAG System Output
Evaluating Prompt Strategy for a Creative RAG Task