
Adding a language model

An encoder-decoder model is essentially a conditional language model, so an encoder-decoder implicitly learns a language model over its output domain of letters from the training data. However, that training data, typically speech paired with text transcriptions, usually contains far less text than is needed to train a strong language model, since large amounts of plain text are much easier to obtain than text paired with speech. An ASR model can therefore usually be improved by incorporating a language model trained on a much larger text corpus.
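One common way to incorporate an external language model is to interpolate its scores with the decoder's during decoding (often called shallow fusion). The sketch below is a minimal illustration of that idea, not the method the source describes in detail: the function name `fused_score`, the toy letter distributions, and the weight value are all assumptions made for the example.

```python
import math

def fused_score(decoder_logprobs, lm_logprobs, lm_weight=0.3):
    """Combine per-token log-probabilities from the encoder-decoder
    with those from an external language model (shallow fusion):
    score(token) = log P_decoder(token) + lm_weight * log P_LM(token)."""
    floor = math.log(1e-10)  # backoff for tokens the LM has not seen
    return {
        tok: logp + lm_weight * lm_logprobs.get(tok, floor)
        for tok, logp in decoder_logprobs.items()
    }

# Toy next-letter distributions: the decoder slightly prefers "a",
# but the (larger) language model strongly prefers "b".
decoder = {"a": math.log(0.5), "b": math.log(0.3), "c": math.log(0.2)}
lm      = {"a": math.log(0.1), "b": math.log(0.8), "c": math.log(0.1)}

scores = fused_score(decoder, lm, lm_weight=0.5)
best = max(scores, key=scores.get)  # the LM flips the choice to "b"
```

In a real decoder this combined score would be applied to each hypothesis inside beam search; the weight on the language model is a hyperparameter tuned on held-out data.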

Updated 2022-05-08

Tags

Deep Learning (in Machine learning)

Data Science