Learn Before
Diagnosing and Correcting Model Tuning Issues
A machine learning team adapts a large, pre-trained language model for a new, highly specific task using a 'Promptless Fine-tuning' approach on a small dataset. After tuning, they notice the model performs well on the new task but has significantly lost its ability to perform general tasks it was originally good at. Explain the likely cause of this performance degradation and recommend a more suitable tuning strategy that would preserve the model's original capabilities.
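The degradation described here is catastrophic forgetting: updating all of the model's weights on a narrow dataset overwrites the parameters encoding its general capabilities. A strategy such as Fixed-LM Prompt Tuning avoids this by freezing the base model and training only a small set of prompt parameters. The toy sketch below (hypothetical names, not a real LLM implementation) illustrates the core mechanism: a single update step touches only parameters marked trainable, leaving frozen base weights, and the knowledge they encode, unchanged.

```python
def train_step(params, trainable, grads, lr=0.1):
    """Apply SGD only to parameters marked trainable; frozen ones are untouched."""
    return {
        name: (value - lr * grads[name]) if trainable[name] else value
        for name, value in params.items()
    }

# Hypothetical one-parameter stand-ins for the base LM and a soft prompt.
params = {"base_weight": 2.0, "prompt_embedding": 0.0}
trainable = {"base_weight": False, "prompt_embedding": True}  # freeze the LM
grads = {"base_weight": 0.5, "prompt_embedding": 0.5}

updated = train_step(params, trainable, grads)
print(updated["base_weight"])       # unchanged: frozen weights preserve prior knowledge
print(updated["prompt_embedding"])  # updated: only the prompt adapts to the new task
```

In a real framework the same effect is typically achieved by disabling gradient computation on the base model's parameters, so the optimizer never modifies them.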
Tags
Data Science
Foundations of Large Language Models Course
Computing Sciences
Analysis in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
Advantages of Promptless Fine-tuning
Disadvantages of Promptless Fine-tuning
Advantages of Tuning-free Prompting
Disadvantages of Tuning-free Prompting
Advantages of Fixed-LM Prompt Tuning
Disadvantages of Fixed-LM Prompt Tuning
Advantages of Fixed-prompt LM Tuning
Disadvantages of Fixed-prompt LM Tuning
Advantages of Prompt+LM Tuning
Disadvantages of Prompt+LM Tuning
Fine-tuning LLMs with Labeled Data
Standard Fine-Tuning
Selecting an Efficient Model Tuning Strategy
A key distinction between different methods for adapting a large language model is which components are modified versus which are kept fixed. Match each tuning strategy with the description of its core mechanism.
A research team is tasked with adapting a very large, pre-trained language model for a highly specialized task. They have access to a small, curated dataset of fewer than 100 examples. Their two main constraints are minimizing computational costs during the adaptation process and preventing the model from losing its extensive general-world knowledge. Which of the following adaptation strategies best balances these requirements?
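For a sense of why a prompt-tuning strategy minimizes adaptation cost, a back-of-envelope comparison of trainable parameter counts helps. The numbers below are illustrative assumptions (a 7B-parameter model, a 20-token soft prompt, hidden size 4096), not figures from the question.

```python
# Hypothetical sizes for illustration only.
base_params = 7_000_000_000          # full fine-tuning updates every base parameter
prompt_tokens, hidden_size = 20, 4096
prompt_params = prompt_tokens * hidden_size  # prompt tuning trains only these

fraction_trained = prompt_params / base_params
print(prompt_params)      # 81,920 trainable parameters
print(fraction_trained)   # roughly one hundred-thousandth of the model
```

Because the frozen base weights are never updated, the model's general-world knowledge is retained while only this tiny task-specific component is learned.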