Learn Before
MT Evaluation
Translations are evaluated along two dimensions:
- adequacy: how well the translation captures the exact meaning of the source sentence.
- fluency: how fluent the translation is in the target language.
The most accurate evaluations use human raters, who score each translation for adequacy and fluency. An alternative is ranking: show raters a pair of candidate translations and ask which one they prefer. While humans produce the best evaluations of machine translation output, running a human evaluation is time-consuming and expensive, so automatic metrics are often used instead.
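Most automatic metrics compare the system output against one or more reference translations. A common building block is BLEU-style modified n-gram precision: the fraction of the candidate's n-grams that also appear in the reference, with counts clipped so a repeated word cannot be rewarded more times than it occurs in the reference. A minimal sketch (the function name and example sentences are illustrative, and a full BLEU score would also combine several n-gram orders and a brevity penalty):

```python
from collections import Counter

def ngram_precision(candidate, reference, n):
    """BLEU-style modified n-gram precision: fraction of candidate
    n-grams that also appear in the reference, with clipped counts."""
    cand = candidate.split()
    ref = reference.split()
    cand_ngrams = Counter(tuple(cand[i:i + n]) for i in range(len(cand) - n + 1))
    ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
    # Clip each candidate n-gram's count by its count in the reference.
    overlap = sum(min(c, ref_ngrams[g]) for g, c in cand_ngrams.items())
    total = sum(cand_ngrams.values())
    return overlap / total if total else 0.0

reference = "the cat sat on the mat"
candidate = "the cat is on the mat"
print(ngram_precision(candidate, reference, 1))  # 5 of 6 unigrams match
print(ngram_precision(candidate, reference, 2))  # 3 of 5 bigrams match
```

Such overlap metrics are cheap and repeatable, but they reward surface similarity to the reference rather than adequacy or fluency directly, which is why human evaluation remains the gold standard.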
Tags
Data Science
Related
Application of autoregressive generation given a prefix: Machine translation
Statistical Machine Translation vs Neural Machine Translation
Backtranslation
MT Corpora
Assessing Translation Effectiveness for a Specific Use Case
A company is developing a translation service for legal documents, where preserving the precise meaning and complex sentence structure of the original text is the highest priority. The company has access to a massive parallel corpus of legal texts. Given these requirements, which approach would be more suitable and why?
Evaluating Machine Translation Quality
Unaligned Data in Sequence Learning