In the framework of learning a soft prompt via knowledge distillation to compress a longer context, match each component with its corresponding role in the process.
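The question refers to distilling a long textual context into a small set of trainable soft-prompt embeddings. As a rough illustration only (not taken from the source), the minimal PyTorch sketch below shows one way that setup is commonly wired together; the checkpoint name, NUM_SOFT_TOKENS, the sample context, and the sample queries are all assumptions made for the example.

```python
# Hypothetical sketch: compress a long context into a learnable soft prompt by
# matching the frozen model's predictions with and without the full context.
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # assumption: any causal LM would do
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()
for p in model.parameters():
    p.requires_grad_(False)  # base model is frozen; only the soft prompt is trained

NUM_SOFT_TOKENS = 16  # assumed length of the compressed prompt
embed = model.get_input_embeddings()
hidden_size = embed.weight.shape[1]
soft_prompt = torch.nn.Parameter(torch.randn(NUM_SOFT_TOKENS, hidden_size) * 0.02)
optimizer = torch.optim.Adam([soft_prompt], lr=1e-3)

long_context = "You are a meticulous assistant. Always answer step by step ..."  # context to compress
user_inputs = ["What is 2 + 2?", "Name a prime number greater than 10."]  # sample queries

context_ids = tokenizer(long_context, return_tensors="pt").input_ids

for step in range(100):
    total_loss = 0.0
    for query in user_inputs:
        query_ids = tokenizer(query, return_tensors="pt").input_ids
        q_len = query_ids.shape[1]

        # Teacher: the frozen model conditioned on the full long context.
        with torch.no_grad():
            teacher_ids = torch.cat([context_ids, query_ids], dim=1)
            teacher_logits = model(teacher_ids).logits[:, -q_len:, :]

        # Student: the same frozen model, with the context replaced by the soft prompt.
        query_embeds = embed(query_ids)
        student_embeds = torch.cat([soft_prompt.unsqueeze(0), query_embeds], dim=1)
        student_logits = model(inputs_embeds=student_embeds).logits[:, -q_len:, :]

        # Distillation objective: KL divergence between teacher and student predictions.
        loss = F.kl_div(
            F.log_softmax(student_logits, dim=-1),
            F.softmax(teacher_logits, dim=-1),
            reduction="batchmean",
        )
        total_loss = total_loss + loss

    optimizer.zero_grad()
    total_loss.backward()
    optimizer.step()
```

In this sketch, the frozen model conditioned on the full long context plays the teacher, the same model conditioned on the learned soft prompt plays the student, and the KL divergence between their output distributions over the user query is the distillation loss being minimized.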
Tags
Ch.4 Alignment - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Analysis in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
Formula for Soft Prompt Optimization by Minimizing Prediction Dissimilarity
Optimizing Language Model API Costs
A team is training a set of learnable, continuous parameters to serve as a compact substitute for a long, detailed textual instruction set for a language model. The goal is for these compact parameters to guide the model to produce the same quality of output as the original long instructions when given any user input. Which of the following best describes the core objective of this training process?
Characteristics of Teacher and Student Models in Knowledge Distillation