Learn Before
Motivation for Parameter-Efficient Fine-Tuning
Updating all of a Large Language Model's parameters (full fine-tuning) is the standard adaptation technique. Although it is far less intensive than pre-training, it remains expensive in practice: every parameter must be stored, updated, and checkpointed for each downstream task. This high computational cost has spurred the development of parameter-efficient fine-tuning approaches, which adapt a model by training only a small fraction of its parameters while keeping the rest frozen.
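The gap in trainable-parameter count can be made concrete with a back-of-the-envelope sketch. The layer size and adapter rank below are illustrative assumptions (not from the text); the adapter count follows a LoRA-style low-rank factorization, one common parameter-efficient method.

```python
# Illustrative comparison: trainable parameters for one d x k projection
# matrix under full fine-tuning vs. a rank-r low-rank (LoRA-style) adapter.
# The dimensions and rank are assumed values for a typical large model layer.

def full_ft_params(d: int, k: int) -> int:
    """Trainable parameters when the d x k weight matrix is fully fine-tuned."""
    return d * k

def lora_params(d: int, k: int, r: int) -> int:
    """Trainable parameters when the update is factored as A @ B,
    with A of shape d x r and B of shape r x k (rank-r adapter)."""
    return r * (d + k)

d = k = 4096   # assumed hidden dimension of one projection matrix
r = 8          # assumed adapter rank

full = full_ft_params(d, k)
lora = lora_params(d, k, r)
print(f"full fine-tuning: {full:,} trainable params")
print(f"rank-{r} adapter:  {lora:,} trainable params ({lora / full:.2%} of full)")
```

Under these assumptions the adapter trains roughly 0.4% of the layer's parameters, which is the kind of reduction that motivates parameter-efficient approaches.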
Tags
Ch.3 Prompting - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Related
Computational Cost of Standard Fine-Tuning
A team is adapting a large, pre-trained language model for a specialized task: summarizing legal documents. They choose an adaptation strategy that involves re-training on the legal dataset and allowing every single parameter within the original model to be updated during this process. Which statement best analyzes a direct consequence of this specific approach?
Evaluating a Model Adaptation Strategy
Motivation for Parameter-Efficient Fine-Tuning
Analyzing a Model Adaptation Strategy
Learn After
A small startup with limited computational resources wants to customize a very large, general-purpose language model to handle specific queries for their niche e-commerce business. Their lead engineer proposes a plan to retrain all layers of the model on their custom dataset. Which of the following provides the most accurate evaluation of this proposal?
Adapting a Large Model on a Budget
The Scalability Problem of Model Adaptation