Bilingual evaluation understudy (or BLEU) is a metric for evaluating the quality of machine translation, using human translations as references for acceptable-quality results. The metric has become a widely used standard in the research literature. But is it a perfect measure of machine translation quality?
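
As a rough illustration of what the episode discusses, the snippet below scores a candidate translation against human reference translations with NLTK's BLEU implementation; the example sentences are invented for demonstration and are not from the episode.

```python
# Minimal BLEU sketch: score a machine-translated sentence against
# human reference translations (illustrative sentences only).
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

references = [
    "the cat is on the mat".split(),
    "there is a cat on the mat".split(),
]
candidate = "the cat sat on the mat".split()

# Smoothing keeps the score from collapsing to zero when some
# higher-order n-grams have no match in any reference.
score = sentence_bleu(
    references,
    candidate,
    smoothing_function=SmoothingFunction().method1,
)
print(f"BLEU: {score:.3f}")
```

BLEU combines clipped n-gram precision (up to 4-grams by default) with a brevity penalty, which is one reason it may not perfectly capture human judgments of translation quality, the question the episode explores.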