
Retrieval-Augmented Generation Process

The Retrieval-Augmented Generation (RAG) process enhances a Large Language Model's (LLM) output by incorporating external knowledge. The process begins with an input context, such as a user's question (e.g., 'What is deep learning?'). This input is used as a query to search an external datastore for relevant information. The system retrieves the 'k' most similar pieces of content, known as nearest neighbors (e.g., 'Deep network is ...', 'Machine learning is ...'). Finally, these retrieved documents are combined with the original input to create an augmented message or prompt, which is then fed to the LLM to generate a more informed and contextually rich response.
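The retrieve-then-augment steps above can be sketched in a few lines of Python. This is a toy illustration, not a production pipeline: the bag-of-words `embed` function, the example datastore contents, and the prompt template are all assumptions made for the sketch (real systems use learned vector encoders and a vector database).

```python
# Minimal sketch of the RAG process: embed the query, retrieve the k
# nearest neighbors from a datastore, and build an augmented prompt.
from collections import Counter
import math

def embed(text):
    # Toy bag-of-words "embedding" (assumption: real systems use a
    # learned encoder that maps text to dense vectors).
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, datastore, k=2):
    # Return the k most similar documents (the "nearest neighbors").
    q = embed(query)
    ranked = sorted(datastore, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def augment(query, docs):
    # Combine the retrieved documents with the original input into one
    # augmented prompt for the LLM.
    context = "\n".join(f"- {d}" for d in docs)
    return f"Context:\n{context}\n\nQuestion: {query}"

# Hypothetical datastore contents, chosen to mirror the example above.
datastore = [
    "Deep learning is a branch of machine learning based on deep neural networks.",
    "Machine learning is the study of algorithms that improve with data.",
    "Photosynthesis converts light energy into chemical energy in plants.",
]

query = "What is deep learning?"
prompt = augment(query, retrieve(query, datastore))
print(prompt)
```

Running the sketch retrieves the two documents most similar to the question and folds them into the prompt; the unrelated photosynthesis entry is left out, which is the point of the retrieval step.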


Updated 2025-10-10


Tags

Ch.2 Generative Models - Foundations of Large Language Models

Foundations of Large Language Models

Foundations of Large Language Models Course

Computing Sciences
