Machine translation has never been better. As major tech companies like Meta continue investing in it, there’s every reason to believe that machine translation will only become more accurate over time.
But it’s still not perfect. And one cognitive scientist, Douglas Hofstadter, has devised a comical experiment to prove it.
A Silly Experiment
Hofstadter is a faculty member at Indiana University who has had a lifelong passion for the Swedish language. After years of formal and informal studies with friends and teachers, he’s developed the ability to speak at a reasonably high level.
Given his love of language and learning, Hofstadter says that he’s enjoyed discovering the weaknesses of machine translation technologies as they’ve improved over the past few years.
That led him to wonder: what would happen if you fed a sophisticated machine translation service a paragraph of complete gibberish? He set out to find out.
The idea was to write a paragraph in Swedish that was total nonsense, yet coherent enough from a grammatical perspective to make sense to a machine translation algorithm.
The experiment had two purposes. First, Hofstadter wanted to see how a machine translation tool would deal with content that was gibberish yet grammatically well-formed.
Second, he wanted to see whether different machine translation tools would produce different results. That is, if he fed the same paragraph of gibberish into Google Translate, would the English output match what other translation technologies produced?
Hofstadter put his paragraph of Swedish gibberish into Google Translate, Baidu, DeepL, and several other translation tools. He found that all of the tools translated the paragraph as if it made real sense, even though it had no meaning.
This is what Hofstadter himself had to say about the results:
Of course, none of the three machine-produced paragraphs has any meaning whatsoever, but the systems aren’t aware of that flagrant lack. This is because they have no notion of what meaningfulness and meaninglessness are. They are not thinking while translating; they are just doing very complicated but knee-jerk reflex operations with pieces of text.
What’s the Point?
Amusement aside, you may be wondering what the point of all this is. There are two things we can learn from Hofstadter’s experiment.
First, it’s a reminder that machine translation algorithms only follow a set of complicated rules when performing translations. They don’t actually seek to understand or derive meaning from whatever text is fed into them.
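To make that idea concrete, here’s a deliberately simplified sketch, not a real MT system: a purely rule-based “translator” with a tiny made-up Swedish-to-English lexicon. It maps words mechanically, so a grammatical but meaningless sentence goes through without complaint, much as Hofstadter observed with the real tools.

```python
# Toy illustration only: a word-for-word "translator" with no notion of
# meaning. The lexicon is a small, invented Swedish-to-English mapping.

LEXICON = {
    "katten": "the cat",
    "sjunger": "sings",
    "en": "a",
    "blå": "blue",
    "tanke": "thought",
    "under": "under",
    "regnet": "the rain",
}

def toy_translate(sentence: str) -> str:
    """Translate word by word; unknown words pass through unchanged."""
    words = sentence.lower().rstrip(".").split()
    translated = " ".join(LEXICON.get(w, w) for w in words)
    return translated.capitalize() + "."

# A grammatical but meaningless sentence is "translated" happily:
print(toy_translate("Katten sjunger en blå tanke under regnet."))
# → The cat sings a blue thought under the rain.
```

The toy never asks whether “the cat sings a blue thought” means anything; it just applies its rules. Real systems are vastly more sophisticated, but Hofstadter’s point is that they share this basic blindness to meaning.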
Second, Hofstadter found that the English translation of his Swedish gibberish differed depending on which machine translation tool he used. This shows that, at least in edge cases like this, machine translation tools aren’t as uniform as you might expect.
It’s a reminder that, although machine translation can be quite helpful in everyday, low-stakes situations, it’s still not perfect. Relying on it too heavily could create strange scenarios like the one we’ve covered in this article.