Learn Before
Concept

Neural contextual encoders

Most neural contextual encoders can be categorized as either sequence or non-sequence models.

  • Sequence models: capture a word's local context in sequential order. Examples: convolutional models, which capture the meaning of a word by aggregating information from its neighbors, and recurrent models, which capture contextual representations with short-term memory. Because they learn contextual representations with a locality bias, sequence models struggle to capture long-range interactions between words, but they are usually easy to train.
  • Non-sequence models: learn contextual representations using a pre-defined tree or graph structure between words. A widely used special case is the fully-connected self-attention model, which places a fully-connected graph over every pair of words and lets the model learn the structure by itself (as attention weights).
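The fully-connected self-attention idea can be sketched in a few lines. This is a simplified illustration, not the full transformer formulation: queries, keys, and values are all taken to be the raw word embeddings (real models use learned projections), and the input matrix `X` is a hypothetical toy example.

```python
import numpy as np

def self_attention(X):
    """Simplified fully-connected self-attention over word embeddings.

    X: (n_words, d) matrix of word embeddings (toy input; no learned
    query/key/value projections in this sketch).
    Returns contextual representations of the same shape.
    """
    d = X.shape[1]
    # Relation score between every pair of words: an n x n fully-connected graph
    scores = X @ X.T / np.sqrt(d)
    # Row-wise softmax turns scores into attention weights (each row sums to 1),
    # i.e., the "structure" the model learns by itself
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    # Each word's contextual representation is a weighted mix of all words
    return weights @ X

X = np.random.default_rng(0).normal(size=(4, 8))  # 4 words, 8-dim embeddings
H = self_attention(X)
print(H.shape)  # (4, 8)
```

Note how, unlike a convolutional or recurrent model, no word is privileged by its position in the sequence: every word can attend to every other word in a single step, which is what removes the locality bias.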

Updated 2022-05-16

Tags

Data Science