Could the language barrier actually fall within the next 10 years?
Wouldn’t it be wonderful to travel to a foreign country without having to worry about the nuisance of communicating in a different language?
In a Wall Street Journal article, technology policy expert Alec Ross argued that, within a decade or so, we’ll be able to communicate with one another via small earpieces with built-in microphones.
No more trying to remember your high school French when checking into a hotel in Paris. Your earpiece will automatically translate “Good evening, I have a reservation” to Bonsoir, j’ai une réservation – while immediately translating the receptionist’s unintelligible babble to “I am sorry, Sir, but your credit card has been declined.”
Ross argues that because technological progress is exponential, it’s only a matter of time.
Indeed, some parents are so convinced that this technology is imminent that they’re wondering if their kids should even learn a second language.
Max Ventilla, one of AltSchool Brooklyn’s founders, voiced exactly this doubt to The New Yorker.
Needless to say, communication is only one of the many advantages of learning another language (and I would argue that it’s not even the most important one).
Furthermore, while it’s undeniable that translation tools like Bing Translator, Babelfish or Google Translate have improved dramatically in recent years, prognosticators like Ross could be getting ahead of themselves.
As a language professor and translator, I understand the complicated relationship between language and technology. In fact, language contains nuances that are impossible for computers to ever learn to interpret.
Language rules are special
I still remember grading assignments in Spanish where someone had accidentally written that he’d sawed his parents in half, or where a student and his brother had acquired a well that was both long and pretty. Obviously, what was meant was “I saw my parents” and “my brother and I get along pretty well.” But leave it to a computer to navigate the intricacies of human languages, and there are bound to be blunders.
In 2016, when asked about Twitter’s translation feature for foreign language tweets, the company’s CEO Jack Dorsey conceded that it does not happen in “real time, and the translation is not great.”
Still, anything a computer can “learn,” it will learn. And it’s safe to assume that any finite set of data (like every single work of literature ever written) will eventually make its way into the cloud.
So why not log all the rules by which languages govern themselves?
Simply put: because this is not how languages work. Even if the Florida State Senate has deemed studying computer code equivalent to learning a foreign language, the two could not be more different.
Programming is a constructed, formal language. Italian, Russian or Chinese – to name a few of the estimated 7,000 languages in the world – are natural, breathing languages which rely as much on social convention as on syntactic, phonetic or semantic rules.
Words don’t indicate meaning
As long as one is dealing with a simple written text, online translation tools will get better at replacing one “signifier” – the term Swiss linguist Ferdinand de Saussure used for a sign’s physical form, as distinct from its meaning – with another.
Or, in other words, an increase in the quantity and accuracy of the data logged into computers will make them more capable of translating “No es bueno dormir mucho” as “It’s not good to sleep too much,” instead of the faulty “Not good sleep much,” as Google Translate still does.
Replacing a word with its equivalent in the target language is actually the “easy part” of a translator’s job. But even this seems to be a daunting task for computers.
So why do programs continue to stumble on what seem like easy translations? Because translation doesn’t – or shouldn’t – involve simply converting words, sentences or paragraphs from one language to another. Rather, it’s about translating meaning. And in order to infer meaning from a specific utterance, humans have to interpret a multitude of elements at the same time.
Think about all the contextual clues that go into understanding an utterance: volume, pitch, situation, even your culture – all are as likely to convey as much meaning as the words you use. Certainly, a mother’s soft-spoken advice to “be careful” elicits a much different response than someone yelling “Be careful!” from the passenger’s seat of your car.
So can computers really interpret?
As the now-classic book Metaphors We Live By has shown, languages are more metaphorical than factual in nature. Language acquisition often relies on learning abstract and figurative concepts that are very hard – if not impossible – to “explain” to a computer.
Since the way we speak often has nothing to do with the reality that surrounds us, machines are – and will continue to be – puzzled by the metaphorical nature of human communications.
This is why even a promising newcomer to the translation game like the website Unbabel, which defines itself as an “AI-powered human-quality translation,” has to rely on an army of 42,000 translators around the world to fine-tune acceptable translations.
You need a human to tell the computer that “I’m seeing red” has little to do with colors, or that “I’m going to change” probably refers to your clothes and not your personality or your self.
If interpreting the intended meaning of a written word is already overwhelming for computers, imagine a world where a machine is in charge of translating what you say out loud in specific situations.
The translation paradox
Nonetheless, technology seems to be trending in that direction. Just as “intelligent personal assistants” like Siri or Alexa are getting better at understanding what you say, there is no reason to think that the future will not bring “personal assistant translators.”
But translating is an altogether different task than finding the nearest Starbucks. This is because machines aim for perfection and rationality, while languages – and humans – are always imperfect and irrational.
This is the paradox of computers and languages.
If machines become too sophisticated and logical, they’ll never be able to correctly interpret human speech. If they don’t, they’ll never be able to fully interpret all the elements that come into play when two humans communicate.
Therefore, we should be very wary of a device that is incapable of interpreting the world around us. If people from different cultures can offend each other without realizing it, how can we expect a machine to do better?
Will this device be able to detect sarcasm? In Spanish-speaking countries, will it know when to use “tú” or “usted” (the informal and formal personal pronouns for “you”)? Will it be able to sort through the many different forms of address used in Japanese? How will it interpret jokes, puns and other figures of speech?
Unless engineers actually find a way to breathe a soul into a computer – pardon my figurative speech – rest assured that, when it comes to conveying and interpreting meaning using a natural language, a machine will never fully take our place.