
Prompting LLMs with Retrieved Texts in RAG

In the Retrieval-Augmented Generation (RAG) framework, the final answer is produced by prompting a Large Language Model (LLM) with an input that combines the original user query and the relevant texts retrieved from an external information source.
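As a concrete illustration, this combination step can be sketched in Python. The prompt template, function name, and example documents below are assumptions for illustration, not taken from the course text:

```python
# Hypothetical sketch of assembling a RAG prompt: the retrieved texts are
# concatenated with the user query into a single input for the LLM.

def build_rag_prompt(query, retrieved_texts):
    """Combine the user query with retrieved texts into one LLM input string."""
    context = "\n\n".join(
        f"[Document {i + 1}] {text}" for i, text in enumerate(retrieved_texts)
    )
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {query}\n"
        "Answer:"
    )

# Illustrative retrieved texts (stand-ins for real retriever output).
docs = [
    "RAG augments an LLM with texts retrieved from an external source.",
    "The retrieved texts and the query are concatenated into one prompt.",
]
prompt = build_rag_prompt("What is RAG?", docs)
print(prompt)
```

The resulting string would then be sent to the LLM as its input; real systems vary in template wording and in how many retrieved texts they include.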


Updated 2026-05-02


Tags

Ch.3 Prompting - Foundations of Large Language Models

Foundations of Large Language Models Course

Computing Sciences
