
Bilingual evaluation understudy (BLEU) is a metric for evaluating the quality of machine translation, using human translations as examples of acceptable-quality output. It has become a widely used standard in the research literature. But is it the perfect measure of machine-translation quality?
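As a rough illustration of the idea discussed in the episode (not code from the episode itself), here is a minimal sketch of computing a BLEU score with the NLTK library, which provides a `sentence_bleu` function that compares a candidate machine translation against one or more human reference translations. The example sentences are made up for demonstration.

```python
# Minimal sketch: scoring one candidate translation against human references with BLEU.
# Assumes the `nltk` package is installed; sentences here are invented examples.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

# One machine-translated candidate and two human reference translations, tokenized.
candidate = "the cat sat on the mat".split()
references = [
    "the cat is sitting on the mat".split(),
    "there is a cat on the mat".split(),
]

# BLEU combines modified n-gram precisions (up to 4-grams by default) with a
# brevity penalty; smoothing avoids zero scores on short sentences.
smoothing = SmoothingFunction().method1
score = sentence_bleu(references, candidate, smoothing_function=smoothing)
print(f"BLEU: {score:.3f}")
```

A higher score means more n-gram overlap with the human references, which is exactly the proxy for quality whose limits the episode questions.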