Concept

Neural Sequence-to-Sequence

In this approach, sentence simplification (SS) is modeled as a sequence-to-sequence problem, typically tackled with an attention-based encoder-decoder architecture. The encoder projects the source sentence into a set of continuous vector representations, from which the decoder generates the target sentence.
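The attention step at the core of this architecture can be sketched as follows. This is a minimal illustration with NumPy, not a full model: the dimensions, sequence length, and random weights are hypothetical stand-ins for trained parameters, and dot-product attention is used for simplicity.

```python
import numpy as np

# Hypothetical dimensions (assumptions for illustration).
rng = np.random.default_rng(0)
d = 8            # hidden size
src_len = 5      # source sentence length

# Encoder output: one continuous vector representation per source token.
encoder_states = rng.normal(size=(src_len, d))

# Decoder hidden state at the current target position.
decoder_state = rng.normal(size=(d,))

# Dot-product attention: score each source representation
# against the current decoder state.
scores = encoder_states @ decoder_state          # shape (src_len,)
weights = np.exp(scores - scores.max())
weights /= weights.sum()                         # softmax -> attention distribution

# Context vector: attention-weighted sum of the encoder representations.
context = weights @ encoder_states               # shape (d,)
```

At each decoding step, the decoder would combine `context` with its own state to predict the next target word, so different source tokens can be attended to for different parts of the simplified output.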

A major advantage of this approach is that it allows models to be trained end-to-end, without extracting features or estimating individual model components such as the language model. In addition, all simplification transformations can be learned simultaneously, instead of requiring individual mechanisms as in previous research.

Updated 2025-10-07

Tags

Data Science

Foundations of Large Language Models Course

Computing Sciences