Man versus machine—the contentious struggle has existed for many years, and in the last several decades it has come to a head in the translation services industry. Machine translation, the automated process by which computer software translates a source language into a target language, has been the subject of great interest and experimentation. Whereas conventional machine translation systems have largely relied on statistical methods built from large text corpora, newer neural machine translation systems developed by companies like Google and Microsoft use deep learning techniques to produce more robust and accurate translations. But when the heat is on, which translator is more reliable: human or machine?

Just last month, in a scene right out of Rocky IV, three advanced machine translation programs were pitted against a hardy group of human translators. The three programs, represented by Google, Systran, and Papago, competed against four human translators to see who could better translate a set of texts comprising news articles and novel excerpts in both English and Korean. Predictably, the machine translation programs completed their translations within a few minutes, solidly beating their human counterparts, who took nearly an hour. It was the human translators, however, who won the day, routing the machines on translation quality. The human translators scored 49 out of 60 points, whereas the machine translation programs' scores ranged from 17 to 28. The organizers noted that the machine translators performed especially poorly when translating literature and failed to revise their own translations.

This competition serves as a reminder that, despite the growing prevalence of automated machine translation, there is still nothing that beats the deft touch (or the Eye of the Tiger) of a human translator.