Concept

Constraining LLM Outputs with Provided Text

A common application of providing reference information in prompts is to constrain the output of a Large Language Model. By supplying relevant text, the model is guided to generate responses that are grounded in and confined to the provided information, rather than making unconstrained predictions based on its general knowledge.
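As a minimal sketch of this idea, the helper below assembles a grounding prompt: it wraps the reference text with an instruction telling the model to answer only from that text and to admit when the answer is absent. The function name and prompt wording are illustrative assumptions, not part of any specific API.

```python
def build_grounded_prompt(reference_text: str, question: str) -> str:
    """Assemble a prompt that constrains the model to the supplied reference text.

    The instruction and template wording here are one illustrative choice;
    real systems tune this phrasing for their model and task.
    """
    return (
        "Answer the question using only the reference text below. "
        'If the answer is not in the text, reply "I don\'t know."\n\n'
        f"Reference text:\n{reference_text}\n\n"
        f"Question: {question}\n"
        "Answer:"
    )


# Usage: the model receives the reference passage and is instructed to
# stay confined to it rather than drawing on its general knowledge.
prompt = build_grounded_prompt(
    reference_text="Refunds are available within 30 days of purchase.",
    question="How long do customers have to request a refund?",
)
print(prompt)
```

The explicit "I don't know" escape clause is a common companion to grounding instructions: without it, a model that cannot find the answer in the reference text is more likely to fall back on its general knowledge.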

Updated 2026-04-29

Tags

Ch.3 Prompting - Foundations of Large Language Models
