The Bureau Works Blog

THE NEW GLOBALIST

Automation: How Bureau Works Harnesses It For Better Translation Quality

Jun 10, '19 by Travon Varnado

 


Industry Norms Aren’t Always Best Practice

Across translation management system (TMS) platforms, automation is an essential feature that users at different levels (i.e., administrators, project managers, translators, reviewers, clients, etc.) rely on to streamline steps of a translation/localization workflow. The quality, complexity, and fluidity of file management at each step of that workflow are affected in different ways depending on the automation capabilities of the TMS in question. After using multiple well-known TMS platforms, such as SDL WorldServer, GlobalLink, Memsource, Lingotek, and MemoQ, it becomes apparent that the norm is to tediously configure automation steps during the creation phase of a project.

Generally, an administrative user (such as a project manager or client executive) predetermines the characteristics, people involved, parameters, and necessary steps for the TMS to follow at each transition of the project’s workflow; this is the common approach to automation among the aforementioned TMS platforms. Within this framework, translators, reviewers, and quality assessment steps are set manually by the project creator and cannot be altered once a project is initiated. A chosen pool of potential translators is notified simultaneously when a project launches, and the first translator to accept the assignment gets it; once the translation task is completed, a reviewer is chosen from a predetermined pool in the same way. Predetermining a pool of desirable translators and reviewers is standard practice, but it is not necessarily the best way to ensure that the most suitable linguists are picked for a specific project.


Requirements differ from one project to the next, just as linguists’ capabilities and productivity differ. The sensible goal, then, is to align project requirements with a translator’s capabilities as closely as possible. From this vantage point, precedent is key: linguists should be evaluated across various measures with respect to their prior translations. Yes, project managers and other administrative TMS users can and do establish their own criteria and metrics for choosing the best-suited translators for a given project. Over months and years, however, translator effectiveness must be continually reevaluated against those criteria for different project types in order to keep the data on every linguist accurate and up to date. Performing this evaluation manually is needless, laborious, and time- and resource-consuming, and it can ultimately harm translation quality if the project-translator matchup is compromised.

How Bureau Works’ Automation Uniquely Advances Quality

The Bureau Works team, in its commitment to consistently produce quality translation, took note of the natural propensity for human error in this complex, dynamic evaluation of linguistic talent and did not leave it to chance. Instead, the Bureau Works platform uses a unique, rules-based robot capable of continuous, dynamic tracking, logging, and evaluation of translator talent. Furthermore, instead of focusing on project-translator matchups, the platform focuses on client-translator matchups (the latter is more logical than the former because clients typically have a specific range of translation requirements that vary only slightly between projects). The Bureau Works robot takes on the responsibility of finding the most compatible linguist(s) for a client through its autopilot feature: the robot searches and filters for onboarded translators whose tags (i.e., user-associated credentials and verified capacities determined during the translator onboarding process) and productivity history match the tags and productivity needs of the client. The Bureau Works platform uses its tagging system for all of its clients and translators, and the two are matched dynamically over time as translator productivity is tracked and compared. Matching finite lists of tags is a manageable job for a human, but productivity tracking is where it gets tricky and requires far more intelligence: algorithmic intelligence, in this case.
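To make the matching step concrete, here is a minimal sketch of tag-based filtering plus a productivity ranking. The data structures, field names, and scoring are illustrative assumptions for this example, not Bureau Works’ actual schema or algorithm.

```python
# A minimal sketch of tag-based client-translator matching.
# The data structures and field names here are illustrative assumptions,
# not the actual Bureau Works schema.

from dataclasses import dataclass


@dataclass
class Translator:
    name: str
    tags: set[str]            # credentials verified at onboarding
    avg_change_ratio: float   # lower = fewer reviewer edits historically


@dataclass
class Client:
    name: str
    required_tags: set[str]   # e.g. {"pt-BR", "legal"}


def match_translators(client: Client, pool: list[Translator]) -> list[Translator]:
    """Keep translators whose tags cover the client's requirements,
    then rank them by historical change ratio (most accurate first)."""
    qualified = [t for t in pool if client.required_tags <= t.tags]
    return sorted(qualified, key=lambda t: t.avg_change_ratio)


if __name__ == "__main__":
    pool = [
        Translator("A", {"pt-BR", "legal", "finance"}, 0.04),
        Translator("B", {"pt-BR", "legal"}, 0.02),
        Translator("C", {"es-MX", "legal"}, 0.01),
    ]
    client = Client("Acme", {"pt-BR", "legal"})
    for t in match_translators(client, pool):
        print(t.name, t.avg_change_ratio)   # B, then A; C lacks the tags
```

Tag matching alone is the easy half; the productivity data that feeds `avg_change_ratio` is what the robot keeps current, as described next.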

For each project, the Bureau Works Quality Management module determines a percentage of change for a translation after it has been edited at the review step of the project workflow. Change is measured across many quality metrics: a ratio of change is determined for terminology inconsistency errors, incorrect translation errors, grammar errors, and fluency errors, to name a few. A percentage-of-changes score is then assigned to that translator’s work on the project in question. Infrequent modifications at the review step produce a low percentage of changes, reflecting high accuracy and productivity for that particular translation.
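As a rough illustration of how such a score could be computed, the sketch below counts reviewer edits by error category and divides by the number of segments. The category names follow the article; the data shapes and the unweighted combination are assumptions made for the example.

```python
# A hedged sketch of a per-category "percentage of changes" score:
# count reviewer edits by error type and divide by the number of segments.

from collections import Counter

ERROR_CATEGORIES = ("terminology", "incorrect_translation", "grammar", "fluency")


def change_ratios(edits: list[str], segment_count: int) -> dict[str, float]:
    """edits: one category label per reviewer correction."""
    counts = Counter(edits)
    return {cat: counts[cat] / segment_count for cat in ERROR_CATEGORIES}


def overall_change_score(ratios: dict[str, float]) -> float:
    """Combined percentage of changes across categories (unweighted average here)."""
    return 100 * sum(ratios.values()) / len(ratios)


if __name__ == "__main__":
    edits = ["grammar", "terminology", "grammar", "fluency"]
    ratios = change_ratios(edits, segment_count=200)
    print(ratios)                                   # per-category change ratios
    print(f"{overall_change_score(ratios):.2f}% of changes")
```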

BWX quality report details

Here, automation is ingeniously harnessed to execute the client-translator matchup. Human error is mitigated through the data the robot continuously compiles for each translator. The robot then uses this data algorithmically as the basis for a complex, holistic comparison between translators. Each translator has a consolidated profile, akin to the one below, with averages for all metrics and a combined average percentage of change. The translator(s) whose data profiles best fit a client’s needs are automatically chosen and paired for work on future projects. (Of course, other factors such as target-language needs, translator availability, and domain-specific knowledge also influence the matching process.) Bureau Works’ automated translator selection maximizes the quality and productivity of the translation workflow in ways other TMS platforms do not: it saves precious time and mitigates the risk of compromised quality inherent in the traditional project-translator matchup process.
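Conceptually, the consolidated profile the robot compares might look something like the sketch below: per-metric change ratios averaged over a translator’s project history, plus a combined average. The field names and averaging scheme are assumptions for illustration only.

```python
# An illustrative sketch of a consolidated translator profile: average the
# per-project change ratios for each quality metric and keep a combined
# average, so profiles can be compared when pairing translators with clients.

from statistics import mean


def consolidate_profile(project_ratios: list[dict[str, float]]) -> dict[str, float]:
    """project_ratios: one dict of per-metric change ratios per completed project."""
    metrics = project_ratios[0].keys()
    profile = {m: mean(r[m] for r in project_ratios) for m in metrics}
    profile["combined_average"] = mean(profile.values())
    return profile


if __name__ == "__main__":
    history = [
        {"terminology": 0.01, "grammar": 0.03, "fluency": 0.02},
        {"terminology": 0.00, "grammar": 0.02, "fluency": 0.01},
    ]
    print(consolidate_profile(history))
    # {'terminology': 0.005, 'grammar': 0.025, 'fluency': 0.015,
    #  'combined_average': 0.015}
```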

The Supporting Foundation

Bureau Works’ localization management environment is the infrastructure that enables the automated selection process to function as it does. It’s the robot’s playground, and, as such, its role is critical in ensuring that translation quality and productivity are not compromised. Within it, translatable text of all accepted file types is dissected and segmented consistently. Metric-based evaluation of translator effectiveness is only reliable if all translators’ work is dissected similarly and assessed by the same means. Bureau Works’ localization environment promotes fair and consistent evaluation for all translators because it evaluates their translations using the same segmentation rules and weighted measures across languages. Translation error ratios and change percentages are thus highly reliable in predicting quality and productivity outcomes for different client-translator matchups.
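To see why shared segmentation rules matter, consider the toy sketch below: a single (deliberately naive) splitter is applied to every document, so “percentage of segments changed” means the same thing for every translator. The splitter here is an assumption for illustration, not Bureau Works’ segmentation engine.

```python
# A minimal sketch of why shared segmentation rules matter: if every
# translation is split with the same rules, per-segment change counts are
# comparable across translators and languages.

import re

# One rule set, applied to every file and every translator alike.
_SENTENCE_BOUNDARY = re.compile(r"(?<=[.!?])\s+")


def segment(text: str) -> list[str]:
    return [s for s in _SENTENCE_BOUNDARY.split(text.strip()) if s]


def change_percentage(text: str, edited_segment_indices: set[int]) -> float:
    """Share of segments the reviewer touched, given a common segmentation."""
    segments = segment(text)
    return 100 * len(edited_segment_indices) / len(segments)


if __name__ == "__main__":
    doc = "First sentence. Second sentence! Third sentence?"
    print(segment(doc))
    print(change_percentage(doc, {1}))   # reviewer edited one of three segments
```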

BWX Quality Change Type Distribution

Together, Bureau Works’ robust localization management platform and the automated client-translator matchup feature push the expectations of what can and should be automated to new heights. Bureau Works is setting the example of just how well automation technology can do tasks that are typically relegated to and underexploited by human intervention.  

Topics: Technology


Written by Travon Varnado

Travon is an M.A. candidate in the Translation and Localization Management program at the Middlebury Institute of International Studies at Monterey. When he isn’t learning about the latest and greatest in localization, he’s working on creative visual artwork.
