Concept

Adapter Layers in Parameter-Efficient Fine-Tuning

A widely used parameter-efficient fine-tuning approach inserts small adapter layers between the existing layers of a Large Language Model. During fine-tuning, only the added adapter parameters are updated for the target task; the pretrained weights stay frozen, so the underlying model is unaltered and the full model never needs retraining. Because an adapter typically projects the hidden state down to a small bottleneck dimension and back up, it adds only a tiny fraction of the original layer's parameters.
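The idea above can be sketched as a bottleneck adapter: a down-projection, a nonlinearity, an up-projection, and a residual connection that adds the result back to the frozen layer's output. The following is a minimal NumPy sketch, not a production implementation; the names `W_down`, `W_up`, and the zero-initialization of `W_up` (so the adapter starts as the identity) are illustrative choices, not requirements of any specific library.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

class Adapter:
    """Bottleneck adapter: down-project, nonlinearity, up-project, residual add.

    Only these two small matrices would be trained; the surrounding
    pretrained model weights stay frozen.
    """
    def __init__(self, d_model, d_bottleneck, rng):
        # Small random down-projection (illustrative initialization).
        self.W_down = rng.standard_normal((d_model, d_bottleneck)) * 0.02
        # Zero-initialized up-projection: the adapter starts as the
        # identity function, so inserting it does not perturb the model.
        self.W_up = np.zeros((d_bottleneck, d_model))

    def __call__(self, h):
        # Residual connection around the bottleneck transformation.
        return h + relu(h @ self.W_down) @ self.W_up

rng = np.random.default_rng(0)
adapter = Adapter(d_model=16, d_bottleneck=4, rng=rng)
h = rng.standard_normal((2, 16))   # a batch of hidden states
out = adapter(h)
# At initialization the adapter is an exact identity mapping.
assert np.allclose(out, h)
```

With `d_model=16` and `d_bottleneck=4`, the adapter adds 2 × 16 × 4 = 128 trainable parameters, versus 16 × 16 = 256 for a single full square layer; the savings grow with model width, which is what makes the approach parameter-efficient.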

Updated 2026-04-30

Tags

Foundations of Large Language Models

Ch.3 Prompting - Foundations of Large Language Models

Foundations of Large Language Models Course

Computing Sciences
