Domain adaptation

Most PLMs are trained on general-domain text corpora. If the target domain differs dramatically from the general domain, we might consider adapting the PLM by continuing pre-training of the selected general-domain PLM on in-domain data. For domains with abundant unlabeled text, such as biomedicine, pre-training a language model from scratch may also be a good choice.
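
A minimal sketch of the continual pre-training option, using the Hugging Face Transformers and Datasets libraries: a general-domain checkpoint is further trained with its original masked-language-modeling objective on an in-domain corpus. The checkpoint name, the `in_domain.txt` corpus file, and all hyperparameters below are illustrative assumptions, not prescriptions.

```python
from transformers import (
    AutoTokenizer,
    AutoModelForMaskedLM,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)
from datasets import load_dataset

# Placeholder general-domain PLM; swap in the checkpoint you selected.
checkpoint = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForMaskedLM.from_pretrained(checkpoint)

# Hypothetical in-domain corpus: plain text, one passage per line.
raw = load_dataset("text", data_files={"train": "in_domain.txt"})
tokenized = raw["train"].map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True,
    remove_columns=["text"],
)

# Continue the masked-language-modeling objective on in-domain data.
collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer, mlm_probability=0.15
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="adapted-plm",
        per_device_train_batch_size=16,
        num_train_epochs=1,
        learning_rate=5e-5,
    ),
    train_dataset=tokenized,
    data_collator=collator,
)
trainer.train()
trainer.save_model("adapted-plm")
```

The adapted checkpoint in `adapted-plm` can then be fine-tuned on the downstream task exactly as the original general-domain PLM would be.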


Updated 2022-05-29


Tags

Data Science