

Google Translate is good for getting an idea of what’s going on and as a start when doing direct, literal translations. If you only use it to point you in the right direction, to give you the gist, it can do the trick probably three-quarters of the time. Where it continues to fail is just about everywhere else, especially for (1) the conversational tone that modern-day marketing copy needs, and (2) the precision and clarity that scholarly writing needs. It is still a long way from replacing human translation, and a much longer way from replacing source language written from scratch. Meanwhile, every other translation tool I know is poor for giving the general idea of most text, and unspeakable for creating publishable text. I should also mention that the Japanese–English pair is my basis for this; there may be better progress in other pairs with more similarly structured languages.

This paper’s got Google Translate written all over it. The telltale signs are there: the same word written three different ways, an acronym half the time and spelled out in full the other half, and comically literal terms. These are the sorts of things I detail in this article. I may trial my theory with some experts in areas like chemistry and physics, where I’m quite sure my skills are limited. Occasionally, though, its translations need little or no correction; this can happen with stock phrases, simple grammar, and highly formulaic writing such as common scientific methods.

More than ever, I see the use of Google Translate. I know this because I get the original text and drop it in Google Translate to be sure. Sure enough, it’s exactly what’s on the paper I was sent for editing. It’s actually more cunning than ever, because the tool seems especially well trained for Chinese. I’m not sure why, because I don’t speak Chinese. Perhaps it’s the sheer amount of data being input. Perhaps it’s the more-consistent patterns in the language. Also, compared with Japanese, there is only one reading of Chinese characters, which leaves much less guesswork for the machine learning. As a consequence, more than ever, Google is spitting out text in which a non-specialist would not spot terminology errors. If it’s epidemiology, economics, or some sort of population study, I can usually spot the problems quite easily. The seemingly good news for me is that specialist editors should be in higher demand than ever. But in a world of Fiverr pseudo-self-appointed editors and increasingly prevalent open access with lower standards and rapid publication, only the smartest and most ethical clients use a specialist. Update in 2021: Since I wrote this in late 2018, it’s really only gotten worse.
