


Bilingual evaluation understudy (BLEU) is a metric for evaluating the quality of machine translation, using human translations as examples of acceptable-quality output. It has become a widely used standard in the research literature. But is it the perfect measure of machine translation quality?
By Kyle Polich
4.4 · 475 ratings
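At its core, BLEU combines clipped n-gram precisions (how many of the candidate's n-grams appear in a reference translation, with counts capped at their maximum reference frequency) with a brevity penalty that discourages overly short candidates. A minimal sentence-level sketch, without the smoothing used in production toolkits (the `bleu` and `ngrams` helpers here are illustrative, not the official NIST reference implementation):

```python
import math
from collections import Counter

def ngrams(tokens, n):
    # All contiguous n-grams in a token sequence.
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(candidate, references, max_n=4):
    """Sentence-level BLEU: geometric mean of modified n-gram
    precisions (n = 1..max_n) times a brevity penalty."""
    precisions = []
    for n in range(1, max_n + 1):
        cand_counts = Counter(ngrams(candidate, n))
        # Clip each candidate n-gram count at its maximum count
        # in any single reference translation.
        max_ref = Counter()
        for ref in references:
            for gram, count in Counter(ngrams(ref, n)).items():
                max_ref[gram] = max(max_ref[gram], count)
        clipped = sum(min(c, max_ref[g]) for g, c in cand_counts.items())
        total = max(sum(cand_counts.values()), 1)
        precisions.append(clipped / total)
    if min(precisions) == 0:
        # Geometric mean is zero if any precision is zero (no smoothing).
        return 0.0
    # Brevity penalty: penalize candidates shorter than the
    # closest-length reference.
    c = len(candidate)
    r = min((abs(len(ref) - c), len(ref)) for ref in references)[1]
    bp = 1.0 if c > r else math.exp(1 - r / c)
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)

reference = "the cat sat on the mat".split()
print(bleu(reference, [reference]))  # identical candidate scores 1.0
```

Note that a candidate identical to a reference scores exactly 1.0, while repeating a common word (e.g. "the the the the") scores 0 because count clipping caps each n-gram's credit. This clipping is what stops a system from gaming the metric with high-frequency words, and it is also one source of the criticism the episode raises: BLEU rewards surface n-gram overlap, not meaning.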
