
Providing Reference Information in Prompts

Beyond using demonstrations for in-context learning, prompts can be enriched with any form of relevant text to create a grounded context. This technique leverages the language understanding of LLMs, enabling them to generate predictions based on the specific information supplied. A key application is constraining the model's output: instructing it to answer only from the provided text rather than from its open-ended parametric knowledge.
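The idea above can be sketched as a small prompt-construction helper. This is a minimal illustration, not a prescribed implementation from the text; the function name and the exact instruction wording are assumptions.

```python
def build_grounded_prompt(reference_text: str, question: str) -> str:
    """Assemble a prompt that instructs the model to answer only
    from the supplied reference text (illustrative wording)."""
    return (
        "Answer the question using ONLY the reference text below. "
        'If the answer is not in the reference, reply "I don\'t know."\n\n'
        f"Reference:\n{reference_text}\n\n"
        f"Question: {question}\n"
        "Answer:"
    )

# Example usage: the reference text both informs and constrains the answer.
reference = (
    "The Transformer architecture relies entirely on attention "
    "mechanisms, dispensing with recurrence and convolutions."
)
prompt = build_grounded_prompt(reference, "What does the Transformer rely on?")
print(prompt)
```

The resulting string would be sent to the model as-is; because the instruction restricts the model to the reference passage, answers outside that text can be refused rather than hallucinated.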

Updated 2025-10-06

Tags

Ch.3 Prompting - Foundations of Large Language Models

Foundations of Large Language Models

Foundations of Large Language Models Course

Computing Sciences
