Learn Before
A development team has built a system that retrieves relevant internal documents to help a language model answer employee questions. They observe that while the correct documents are being retrieved, the model's final answers often seem generic and do not effectively synthesize the specific details from the provided text. The model appears to be relying more on its pre-existing knowledge than the retrieved context. Which of the following strategies would most directly address this specific issue by training the model to better utilize the provided information?
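The remedy the question points toward is fine-tuning on examples that pair retrieved context with answers grounded in that context. A minimal sketch of building one such supervised training example is below; the field names, helper function, and sample document are illustrative assumptions, not part of the original question.

```python
# Sketch of a supervised fine-tuning example that teaches a model to ground
# its answer in retrieved documents rather than its pre-existing knowledge.
# The function name, prompt wording, and sample data are hypothetical.

def build_training_example(question, retrieved_docs, grounded_answer):
    """Pack the retrieved documents into the prompt so the target answer
    can only be reproduced by copying specifics from the context."""
    context = "\n\n".join(
        f"[Doc {i + 1}] {doc}" for i, doc in enumerate(retrieved_docs)
    )
    prompt = (
        "Answer using ONLY the context below and cite the document used.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )
    return {"prompt": prompt, "completion": grounded_answer}

example = build_training_example(
    question="How many vacation days do new employees receive?",
    retrieved_docs=["New employees accrue 15 vacation days per year."],
    grounded_answer="New employees accrue 15 vacation days per year. [Doc 1]",
)
```

Training on many such pairs rewards answers that restate specifics from the provided text, which directly targets the generic-answer failure described above.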
Tags
Ch.3 Prompting - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Evaluation in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
Fine-Tuning LLMs to Refuse Answering in RAG
Improving a Customer Support RAG System
Comparing RAG Implementation Strategies