Prompting as a Text Generation Task
A common approach to using Large Language Models is to describe the desired task in text and then prompt the model to generate a response based on that description. This frames problem-solving as a standard text generation task: the model's objective is simply to continue or complete the text, starting from the provided context.
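This framing can be illustrated with a minimal sketch (all function names here are hypothetical, and the model call is a stand-in, not a real API): the task description and the user input are concatenated into one text context, and "solving the task" is nothing more than extending that text.

```python
# Illustrative sketch: framing a task as text generation.
# `build_prompt` and `generate` are hypothetical names, not a real LLM API.

def build_prompt(instruction: str, user_input: str) -> str:
    """Combine the task description and the input into a single text context."""
    return f"{instruction}\n\nInput: {user_input}\nOutput:"

def generate(context: str) -> str:
    """Stand-in for an LLM call: a real model would return the most likely
    continuation of `context`. Here we return a fixed placeholder."""
    return " <model continuation>"

prompt = build_prompt(
    "Translate the following English sentence into French.",
    "The weather is nice today.",
)
completion = generate(prompt)
full_text = prompt + completion  # the model simply extends the given text
```

The point of the sketch is that the model never "sees" a special task interface; from its perspective, the instruction, the input, and its own answer are one continuous piece of text.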
Tags
Foundations of Large Language Models
Ch.2 Generative Models - Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Related
Zero/Few-Shot Learning
A team is tasked with adapting a large, pre-trained language model to summarize legal documents. One developer designs a method where each summarization request includes a detailed set of instructions and examples of high-quality summaries, which are provided to the original, unchanged model. Another developer uses a large dataset of legal documents and their corresponding summaries to make small, permanent adjustments to the model's internal configuration before deploying it. What is the most significant difference between these two approaches regarding the pre-trained model itself?
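The distinction the question turns on can be sketched with a toy model (all class and method names here are hypothetical): prompting only reads the model's parameters at inference time, while fine-tuning permanently rewrites them.

```python
# Illustrative toy model (hypothetical, not a real library): prompting reads
# the weights but never writes them; fine-tuning updates them in place.

class ToyModel:
    def __init__(self):
        self.weights = [0.1, 0.2, 0.3]  # stands in for billions of parameters

    def generate(self, prompt: str) -> str:
        # Inference only reads self.weights; nothing is modified.
        return prompt + " <summary>"

    def fine_tune(self, dataset) -> None:
        # Training applies (simulated) gradient updates to the weights.
        self.weights = [w + 0.01 for w in self.weights]

model = ToyModel()
before = list(model.weights)

model.generate("Summarize this contract:")
assert model.weights == before   # prompting: parameters untouched

model.fine_tune(["document/summary pairs"])
assert model.weights != before   # fine-tuning: parameters permanently changed
```

The first developer's approach corresponds to the `generate` call: the pre-trained model is left unchanged. The second developer's approach corresponds to `fine_tune`: the model's internal configuration is permanently adjusted.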
Choosing a Model Adaptation Strategy
Key Areas of Prompt Engineering
Instruction-Following Ability of LLMs
Components of a Prompt: Instruction and User Input
When a language model successfully performs a new task based on a well-crafted prompt, its internal parameters are not adjusted at all: the weights remain frozen, and the model's behavior changes only because its generation is conditioned on the instructions provided in the context.