
CL in Language Modeling

In principle, a large LM trained on a sufficiently large and diverse corpus can perform well across many datasets and domains. Research has shown that continual domain- and task-adaptive pre-training of LMs yields performance gains on downstream NLP tasks. As a result, research interest in LM-based methods for continual learning (CL) in NLP has recently surged.
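The idea of domain-adaptive continued pre-training can be illustrated with a deliberately tiny sketch: a count-based bigram language model (a stand-in for a real neural LM, chosen here only so the example is self-contained) is first trained on a general corpus, then training simply continues on a domain corpus, and perplexity on domain text drops. The `BigramLM` class and both corpora are hypothetical illustrations, not from the source.

```python
import math
from collections import Counter

class BigramLM:
    """Toy count-based bigram LM with add-one smoothing (stand-in for a neural LM)."""
    def __init__(self):
        self.bigrams = Counter()
        self.unigrams = Counter()
        self.vocab = set()

    def train(self, tokens):
        # Continual training: counts simply accumulate across successive corpora.
        self.vocab.update(tokens)
        for a, b in zip(tokens, tokens[1:]):
            self.bigrams[(a, b)] += 1
            self.unigrams[a] += 1

    def perplexity(self, tokens):
        v = len(self.vocab) + 1  # +1 smoothing slot for unseen tokens
        log_prob, n = 0.0, 0
        for a, b in zip(tokens, tokens[1:]):
            p = (self.bigrams[(a, b)] + 1) / (self.unigrams[a] + v)
            log_prob += math.log(p)
            n += 1
        return math.exp(-log_prob / n)

general = "the cat sat on the mat and the dog sat on the rug".split()
domain = "the model adapts to the target domain during continued pre-training".split()

lm = BigramLM()
lm.train(general)                  # stage 1: general-purpose pre-training
before = lm.perplexity(domain)
for _ in range(3):                 # stage 2: domain-adaptive continued training
    lm.train(domain)
after = lm.perplexity(domain)
print(after < before)              # domain perplexity drops after adaptation
```

The same two-stage recipe is what full-scale systems follow: take a generally pre-trained LM and keep running the pre-training objective on in-domain text before fine-tuning on the downstream task.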


Updated 2022-08-21

Tags

Data Science