Concept

Fine-Tuning Strategies

Beyond standard fine-tuning, several other fine-tuning strategies are useful in practice.

  • Two-stage fine-tuning: This introduces an intermediate stage between pretraining and fine-tuning. In the first stage, the pretrained model (PTM) is fine-tuned on an intermediate task or corpus. In the second stage, the resulting model is fine-tuned on the target task.

  • Multi-task fine-tuning: Fine-tuning the model on multiple tasks at once allows information to be shared across tasks and can yield positive transfer to related tasks.

  • Fine-tuning with extra adaptation modules: The main drawback of standard fine-tuning is its parameter inefficiency: every downstream task requires its own full copy of fine-tuned parameters. A more economical solution is to inject small, trainable adaptation modules into the PTM while keeping the original parameters fixed.
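The multi-task idea above can be sketched with a shared encoder and one lightweight head per task; gradients from every task would update the shared parameters, which is where information sharing happens. This is a minimal illustration, and the task names and dimensions are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_hid = 8, 4

# Shared encoder parameters: in multi-task fine-tuning, every
# task's loss contributes gradient updates to these weights.
W_shared = rng.standard_normal((d_in, d_hid)) * 0.1

# One small output head per task (hypothetical tasks for illustration).
heads = {
    "sentiment": rng.standard_normal((d_hid, 2)) * 0.1,  # 2 classes
    "topic": rng.standard_normal((d_hid, 5)) * 0.1,      # 5 classes
}

def forward(x, task):
    # All tasks flow through the same shared representation.
    rep = np.tanh(x @ W_shared)
    return rep @ heads[task]

x = rng.standard_normal((3, d_in))
logits_sentiment = forward(x, "sentiment")  # shape (3, 2)
logits_topic = forward(x, "topic")          # shape (3, 5)
print(logits_sentiment.shape, logits_topic.shape)
```

During training one would typically interleave batches from the different tasks, so the shared encoder sees gradient signal from all of them.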
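The adapter-module strategy can likewise be sketched as a frozen pretrained projection with a small residual bottleneck branch; only the bottleneck weights would be trained. This follows a common adapter design (down-project, nonlinearity, up-project, residual), with illustrative names and sizes, not any specific library's API.

```python
import numpy as np

rng = np.random.default_rng(0)

class AdapterLayer:
    """Frozen pretrained weights plus a small trainable bottleneck
    adapter in a residual branch (illustrative sketch)."""

    def __init__(self, d_model=64, d_bottleneck=4):
        # Pretrained weights: kept fixed during fine-tuning.
        self.W_frozen = rng.standard_normal((d_model, d_model)) * 0.1
        # Adapter weights: the only parameters that would be trained.
        self.W_down = rng.standard_normal((d_model, d_bottleneck)) * 0.1
        # Zero-initialising the up-projection makes the adapter a
        # no-op at the start, so training begins from the PTM's behavior.
        self.W_up = np.zeros((d_bottleneck, d_model))

    def forward(self, x):
        h = x @ self.W_frozen
        # Residual adapter branch: down-project, ReLU, up-project.
        return h + np.maximum(h @ self.W_down, 0.0) @ self.W_up

    def trainable_parameters(self):
        return [self.W_down, self.W_up]

layer = AdapterLayer()
x = rng.standard_normal((2, 64))
out = layer.forward(x)
# With W_up initialised to zero the output equals the frozen layer's.
assert np.allclose(out, x @ layer.W_frozen)

n_frozen = layer.W_frozen.size
n_adapter = sum(p.size for p in layer.trainable_parameters())
print(f"frozen params: {n_frozen}, trainable adapter params: {n_adapter}")
```

The count at the end shows the parameter-efficiency argument: per task, only the small adapter (512 parameters here) is stored, while the frozen backbone (4096 parameters) is shared across all tasks.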


Updated 2025-10-06

Tags

Data Science

Ch.2 Generative Models - Foundations of Large Language Models

Foundations of Large Language Models

Foundations of Large Language Models Course

Computing Sciences
