Sourcing… Looking for new providers for the future, for current needs, for a big upcoming project… A lot has already been said about one of the most important tasks in any translation and localization company. Test translations – here we go again (cue David Coverdale and Whitesnake)…
One crucial part of this process is the initial quality control of a short sample (or several samples, if a candidate claims expertise in more than one field) to be delivered by an applicant within a given timeframe and on specific terms. Some people on both sides of the translation-company fence have recently argued that this step is unnecessary, since a candidate's true value will show during the first, often short, project.
I disagree for several reasons, and I treat the test as an additional means of qualifying a person for collaboration – the “trust but verify” principle, remember? Contrary to what some of my esteemed industry colleagues say, I see no problem with short tests. I have always treated them as an occasion to show a potential client our value without the risk of a real-life project, and to give them the opportunity to check whether we really are what they are looking for in every imaginable respect – quality, adherence to instructions, communication, timely delivery and so on. Still, one thing has bothered me many times while awaiting test results: who checks the test?
At first sight, this is a trivial question, as the answer is clear: a native speaker of the target language with subject-matter expertise and experience in translation, localization, editing/revision, proofreading and, preferably, quality control/assurance. Basically, the essentials outlined in the widely adopted ISO 17100 standard for the translation industry. Not so fast.
You can always receive a QA form with “FAIL” written all over it. What the heck – you expected a high-score “PASS”?! So you start going through the “errors”, and no matter how hard you try to see the reviewer’s point, you can’t understand why each and every preferential change is a “Mistranslation: minor”. Then you see that all the “Accuracy: major” entries are not errors in the first place and don’t belong in the “Accuracy” category at all (according to the QA instructions or the LISA QA Model). Sound familiar? Wait, there’s more.
You see a long-established, approved Microsoft term that fits the context perfectly, rejected as a “Terminology error” on the flimsy grounds that the reviewer’s own proposal “has already been widely adopted in the target language”. Who did the review? With a bit of luck, you can find this information in the file properties. There it is! You google the person behind your “failure” and discover that their experience is unimpressive and that they accept any job in any field of expertise that is offered to them, to name just a couple of red flags.
It may happen that they don’t even meet the basic criteria of the EN 15038 standard (the predecessor of ISO 17100). In short: it is you who should be qualifying them, not vice versa.
So what can you do here? It’s simple – be professional, as always. Respond with a detailed rebuttal: quote your sources and show where the reviewer’s arguments are invalid. Point out their qualifications and explain why they should not be checking your sample, as their expertise and credentials simply cannot compete with yours. Be polite but consistent. Perhaps, for various reasons, you will lose this particular client or project, but your reputation is at stake (yes, people talk), so show that it is not you who should fall off the roster.
Good luck and keep your fingers crossed!