Translated by Charlotte Doane
In recent years, massive strides in machine translation have led many media outlets to proclaim the death of the translator. Translation engines have made excellent progress in quality and efficiency, and in record time, but they are still far from rivalling the human capacity for critical thinking.
These announcements are sometimes triumphant and sometimes alarmist, describing the phenomenal evolution of MT and the dangers it poses for the translation profession. However, they are generally prompted by limited trials in specific fields. One such test was conducted by a team at Microsoft, who used their own AI system to translate newspaper excerpts from Chinese to English and declared it as effective as a human translator. Nicolas Bousquet—former scientific director at Quantmetry, the company that collaborated with DeepL to translate the reference text Deep Learning—thinks that MT is indeed a threat to people who translate “vacuum cleaner instructions,” but not to literary translators.
However, this simplistic view draws a hard line between literature and everything else—a binary of lawnmower manuals and novels, with nothing in between. The reality is more complex. Scientific, legal and technical works are just as much a product of culture as any work of fiction; they contain multiple references and are often written in a particular style. Machines fail to grasp these implicit allusions, which a human being recognizes instantaneously, because they do not understand the world around them. This recently became apparent in documents about COVID-19, as engines did not recognize the term as the name of an illness. Current systems completely lack common sense, and it remains to be seen whether artificial intelligence will ever decode that particular mystery.
A more concrete problem lies in the way these engines work: they translate each sentence in isolation. Many linguistic considerations span multiple sentences and can only be resolved in context. A human translator can scan a text for clues that determine, for instance, whether the neutral English pronoun “it” refers to a noun that is masculine or feminine in French—something a machine cannot necessarily figure out. Even when it avoids such grammatical errors, MT output often lacks stylistic coherence: a translation engine may alternately render “maladie” as disease, illness, disorder or affliction. And unlike human translators, machines never provide sources for their choices. This can make revising their translations extremely labour-intensive and even counterproductive, when the same results can be achieved with the efficient translation-assistance software that has been used for decades.
Quality MT output also requires that the engine be trained on a reliable and sizeable corpus, which poses a significant challenge when translators want to adapt a system to a particular project, client, content niche or language pair. There are many more texts available in French and English than in Bulgarian and Malay, for example, so machine translation between certain pairs is much more difficult to achieve. Even when a suitable corpus does exist, most of the texts come from official publications by international organizations like the UN, which is useless for teaching engines how to translate speech, advertising catchphrases or niche content. For any type of text, many factors outside the text itself must be considered—target audience, client preferences, local particularities, and so on. All of this means that machine translation remains entirely reliant on human assistance—from someone who knows what they’re doing.
Skilled human intervention is vital at multiple stages of the MT process, beginning with continually feeding the engine a high-quality body of work that keeps pace with the constant evolution of human language and knowledge. A human also has to check the translations that the engine produces, especially when translators are using MT in their work. This verification step, known as post-editing, consists of revising the text to eliminate errors and clichés resulting from the automatic nature of machine translation. Post-editing is no small task and requires a keen eye for the practical aspects of language as well as a thorough understanding of the theoretical, cultural and technological elements at play. Hence the general prerequisite of three to five years of translation experience.
So the question is, how do we integrate post-editing into the professional translation world, and how do we train the next generation? For now, language professionals young and not so young are adjusting at their own pace, using machine translation in conjunction with their better judgement. We have learned that it is a good practice to have someone with more experience revise younger post-editors’ work. All in all, it is clear that MT is taking up more and more space in the translation toolbox and putting our professional skills to the test.
While translator training should certainly include new available technologies, it should always prioritize cognitive skills, such as critical thinking, the various levels of communication and the plurality of knowledge. Students should be trained in areas of expertise where machine translation fails: not only literature, but also in-depth analysis, interviews, marketing and more. The emphasis must always be on added value.
One of the authors of Deep Learning, Yoshua Bengio, has posited another interesting possibility for the future of translation. Machine translation may lead to an explosion of translation around the world, enabling the revitalization of many languages, especially in scientific and technical writing. Increased access to knowledge from a variety of languages and cultures could balance out the current hegemony of English, allow other linguistic communities to be heard, and broaden the range of voices leading global information dissemination and knowledge sharing.
No, the progress of machine translation itself is not what poses a threat to translators, because we already know how to put it to good use for our work and for the future of our profession. The real danger is in MT causing translation to lose its perceived value in the eyes of the general public and of our clients, leading to generally lower wages in the endless pursuit of profitability. Our greatest challenge lies in rectifying this perception, which is too often based on preconceived notions of our work, and in communicating to others how complex and multifaceted translation really is.