
Few-Shot and Zero-Shot Learning

Most DL models are supervised models that require large amounts of labeled data. In practice, it is expensive to collect such labels for each new domain. Fine-tuning a pre-trained language model (PLM, e.g., BERT or OpenGPT) on a specific task requires far fewer domain labels than training a model from scratch, which opens up opportunities for developing new zero-shot and few-shot learning methods based on PLMs.
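The core idea behind PLM-based zero-shot classification can be sketched without any pre-trained model at all: score an input text against a natural-language description of each label and pick the best match, with no task-specific training examples. The toy "embedding" below is just a bag-of-words counter; a real system would replace it with dense vectors from a PLM such as BERT. All function and label names here are illustrative assumptions, not from the original text.

```python
from collections import Counter
import math

def embed(text):
    # Toy "embedding": bag-of-words counts. A PLM (e.g., BERT) would
    # instead produce dense contextual vectors for each text.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def zero_shot_classify(text, label_descriptions):
    # Score the text against a description of each label; no labeled
    # training examples for the task are needed.
    v = embed(text)
    return max(label_descriptions,
               key=lambda lbl: cosine(v, embed(label_descriptions[lbl])))

# Hypothetical labels for illustration.
labels = {
    "sports": "a story about sports games teams and players",
    "finance": "a story about money markets stocks and banks",
}
print(zero_shot_classify("the players won the game", labels))  # sports
```

Few-shot learning with a PLM follows the same spirit: instead of describing each label, a handful of labeled examples per class are used, either to fine-tune the model's final layers or as in-context demonstrations in a prompt.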


Updated 2022-06-04


Tags: Data Science