Introduction
As of August 2018 we’re happy to say that Squarespace’s all-in-one platform is now available to members and visitors in English, French, German, Spanish, Portuguese, and Italian. It’s hard to believe that only a few years ago providing an internationalized product experience was nothing but an uncertain bullet-point on our product roadmap. We knew we wanted to find a way to better serve our customers from all over the world but the engineering challenge was huge: how were we going to translate a 10-year-old codebase?
As the owners of the front-end build process, the Interface Architecture team was tasked with architecting a solution for our large and constantly changing front end. More than a decade of product development results in a lot of JS: at the time we were looking at over 480,000 lines of code in 7,000 .js/.jsx files, which were under active development and changing every day. With limited internationalization experience across the members of our team, the challenge was both daunting and exciting. It was an exercise in constantly learning, revisiting our assumptions, and accordingly refactoring.
As we made progress on a solution, we learned that creating a reliable translation system was not limited to the technical problem of extracting and translating strings. A reliable translation system would consider the holistic requirements of all of the stakeholders who were involved in delivering an exemplary internationalized product experience: engineers, product managers, language experts, translators, QA analysts, etc. Through lessons learned from five site language launches, we’ve arrived at the system we have today: a system which is responsible for reliably translating over 100,000 strings across five languages.
In this blog post I’ll describe the system we built for delivering front-end translations at Squarespace. I’ll describe how translation code is written, extracted, and translated, the rationale behind some of our architectural decisions, and some of the functional and internationalization-specific lessons we learned along the way.
How translations work today
The lifecycle of our front-end translation system can be split into four main steps.
1. Engineers create translations in code with translation helpers
“Translation helper” is a generic term for a function that wraps a string that should be translated. For example, to render a presentation string in vanilla JS, an engineer would use our `t()` function:

```js
t('Hello {name}', { name: 'Dan' }, { notes: 'Saying hello', project: 'hello' });
```
Alternatively, the same string in a React/.jsx file would utilize our `<T>` component, a translation helper specifically for React.
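For illustration, a React version might look like the following sketch. The exact `<T>` API is internal, so the import path and prop names here are assumptions, not our actual component:

```jsx
// Hypothetical <T> usage; the import path and the `message`, `notes`, and
// `project` prop names are assumptions for illustration.
import { T } from '@internal/i18n-react';

const Greeting = () => (
  <T message="Hello {name}" name="Dan" notes="Saying hello" project="hello" />
);
```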
Translation helpers have four main components:
- The string copy, with dynamic substitutions wrapped in curly brackets.
- A substitutions object, which maps substitutions in string contents (like `name`) to variables resolved at runtime. In React, substitutions are passed as props.
- An optional `notes` argument, which is free-form text that provides context to translators about a given string.
- A `project` argument, which is used for logically grouping translation strings and routing them to extraction files. More on that later.
We provide translation helpers via internal npm packages for both vanilla JavaScript and React, including helpers for plural values, numbers, dates, units, and money. Many of these helpers are powered by globalize.js, a popular open-source JavaScript library that formats values according to the specifications defined in the Unicode Common Locale Data Repository (CLDR). The Unicode CLDR is the international standard that defines how different sets of strings are resolved in different locales, including dates, time zones, numbers, and currencies.
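For a sense of what globalize.js provides underneath our helpers, here is a small standalone sketch against its public API (not our internal wrappers), with CLDR JSON supplied by the `cldr-data` package:

```js
// Standalone globalize.js usage; outputs are approximate.
const Globalize = require('globalize');
const cldrData = require('cldr-data');

Globalize.load(cldrData.entireSupplemental());
Globalize.load(cldrData.entireMainFor('en', 'fr'));

const fr = Globalize('fr');
fr.formatNumber(1234.56);                              // "1 234,56"
fr.formatDate(new Date(2018, 7, 1), { date: 'long' }); // "1 août 2018"
fr.formatCurrency(9.99, 'EUR');                        // "9,99 €"
```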
Once they’re ready to have their strings translated, engineers commit their code with translations directly to the `master` branch of our main Git mono-repo.
2. Translation strings are extracted to English YAML files on a translation branch of master
After wrapping the relevant English strings in translation helpers, those strings must be extracted from the codebase into the file format acceptable by our translation service, YAML. We’ve dubbed the process that runs this translation extraction the Unified Translation Workflow (UTW).
We have a special git branch – `i18n/translation` – that is kept up to date with our `master` branch. UTW runs on `i18n/translation` on a weekly schedule and is triggered by an npm script.
The script does the following:
- The latest `master` commits are rebased onto `i18n/translation`.
- UTW is run on the `i18n/translation` branch by an npm script.
- UTW consumes and transforms all of our front-end source code into Abstract Syntax Trees (ASTs). It operates over these ASTs to identify, validate, and parse all of the translation helpers in our source code at the current HEAD commit hash in Git (a sketch of this AST pass follows the list).
- After parsing a translation helper, UTW writes its string content and any associated metadata to an English (en-US) YAML file. It uses the helper’s `project` attribute to determine which YAML file to write to.
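UTW itself is internal, but a minimal sketch of the same idea, parsing source into an AST and collecting `t()` calls, might look like this with `@babel/parser` and `@babel/traverse`. The keying scheme shown is illustrative, not UTW's actual hash:

```js
const crypto = require('crypto');
const parser = require('@babel/parser');
const traverse = require('@babel/traverse').default;

// Collect every static t() call in a source file as { key, value } entries.
function extractTranslations(source, filename) {
  const ast = parser.parse(source, { sourceType: 'module', plugins: ['jsx'] });
  const entries = [];
  traverse(ast, {
    CallExpression(path) {
      const { callee, arguments: args } = path.node;
      if (callee.type !== 'Identifier' || callee.name !== 't') return;
      const [msg] = args;
      if (!msg || msg.type !== 'StringLiteral') return; // only static strings
      // Illustrative keying: hash the value plus some distinguishing metadata.
      const key = crypto
        .createHash('md5')
        .update(`${filename}:${msg.value}`)
        .digest('hex');
      entries.push({ key, value: msg.value });
    },
  });
  return entries; // the real UTW also validates notes/project and writes YAML
}
```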
To better understand the routing behavior, let’s assume we have the following code in `greetings.js`:

```js
const helloMsg = t('Hello {name}', { name: 'Dan' }, { notes: 'Saying hello', project: 'hello' });
const goodbyeMsg = t('Goodbye {name}', { name: 'Dan' }, { notes: 'Saying goodbye', project: 'goodbye' });
```
UTW validates and parses each `t()` and looks at its `project` attribute. Based on these values, it determines that the translations should be routed to `hello.en-US.yaml` and `goodbye.en-US.yaml` respectively:

hello.en-US.yaml

```yaml
# Saying hello | source: ./greetings.js
48caf4b6b3b6a39e0f6b8e1da795e06c: "Hello {name}"
```

goodbye.en-US.yaml

```yaml
# Saying goodbye | source: ./greetings.js
a062d0b5705d5c44970846af9084adf5: "Goodbye {name}"
```
YAML files are collections of key–value pairings. We generate a unique key for each instance of every string as a hash of its value and properties, regardless of whether its string content is the same across files. In other words, if the English string “Okay” appears across the codebase, each instance of “Okay” will be extracted with its own unique key.
We also provide metadata to translators for each string via YAML comments. Metadata includes the file location of a string (useful for hunting down a problematic translation) and the contents of the `notes` attribute. Providing useful context in code significantly improves overall translation quality and reduces QA time.
The UTW extraction flow has a few key benefits:
- Defining `project` at the string level allows engineers to route logically grouped strings to the same extraction file, regardless of where those strings appear in source. Not only does this allow for the easy prioritization of a set of translations—say, for a key product launch—but it also makes our extraction files agnostic to file movement. If you rename or relocate source files, the extraction files remain unchanged.
- Since we always generate translations on a branch that mirrors the latest commits to `master`, the state of the source code at that Git HEAD commit hash serves as the source of truth for the state of our translations. If we want to recreate our translations at any point in time (say, for debugging purposes), all we have to do is check out our repository at a specific commit and rerun the UTW extraction. The state of our code is the state of our translations.
3. English YAML files are sent for translation via Idioma and returned to the translation branch
In a previous blog post, Xiaomeng Chen introduced Idioma and Smartling. Idioma is an internal tool built at Squarespace that serves as the transport mechanism between our source code and Smartling, the third party service that translates our strings. Idioma watches the state of our `i18n/translation` branch. When it detects changes in en-US YAML files, it automatically coordinates sending those files to Smartling for translation.
Idioma polls Smartling to detect when translations are complete in our target languages. It pushes any translated YAML files back to `i18n/translation`.
4. Translated YAML files are merged to master and built into production
To get translated YAML files into production, we merge the translated YAML files from the `i18n/translation` branch back into `master`. Our production build consumes and loads these translations using a custom webpack loader plugin, so when Squarespace users visit the site under a different language, they’re presented with a fully translated version of the site.
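Our loader isn't public, but the core idea, turning a translated YAML file into a JavaScript module at build time, can be sketched in a few lines (using `js-yaml`; the real loader does considerably more):

```js
// translations-loader.js: minimal webpack loader sketch.
const yaml = require('js-yaml');

module.exports = function translationsLoader(source) {
  // e.g. { "48caf4b6...": "Bonjour {name}" }
  const translations = yaml.load(source);
  return `module.exports = ${JSON.stringify(translations)};`;
};
```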
Lessons learned
As a greenfield project, the road to the stability of our translation system was paved with some initial failures and interesting lessons learned. We’re hoping that sharing some of these lessons helps inform the roadmap of anyone facing a similar task.
Maintain the translatability of dynamic strings using substitutions
Strings generally fall into two buckets: static and dynamic.
A static string doesn’t have substitutions and is straightforward to translate:
Choose a widget
Select a template
View our privacy policy
Dynamic strings, composed of substrings that are resolved at runtime via substitutions, are more complex, but usually still straightforward to translate:
You have {numberCredits} credits remaining
Hi {name}, your birthday is on {birthday}
When viewing strings in their translation UI, translators see dynamic strings with the substitution names in place, exactly like the two strings above. Descriptive substitution names allow translators to move variables around to match the semantics of the target language.
The correct use of substitutions allows you to avoid imposing English semantics on your strings. For example, take the following naively written code snippet:
```js
const color = t('brown', null, { project: 'dogs' });
const size = t('big', null, { project: 'dogs' });
const dogDescription = 'The ' + size + ' ' + color + ' dog.';
alert(dogDescription);
```
The correct translation of “The big brown dog” in Spanish is `El gran perro marrón`—the word for brown, `marrón`, follows the word for dog, `perro`. However, if we were to render the above snippet in Spanish, `dogDescription` would resolve to `El gran marrón perro`. This code structure doesn’t provide a translator the flexibility to compose the string appropriately.
Using substitutions allows translators to move variables accordingly:
```js
const dogDescription = t('The {size} {color} dog', { size: 'big', color: 'brown' }, { project: 'dogs' });
alert(dogDescription);
```
A translator would be able to return the correct string—`El {size} perro {color}`—which would correctly resolve to `El gran perro marrón` on screen.
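The interpolation step itself is simple. A toy version of what a helper like `t()` might do once it has the translated string (illustrative only, not our implementation):

```js
// Replace {token} placeholders with values from the substitutions object.
function substitute(str, subs = {}) {
  return str.replace(/\{(\w+)\}/g, (match, name) =>
    name in subs ? String(subs[name]) : match
  );
}

substitute('El {size} perro {color}', { size: 'gran', color: 'marrón' });
// => 'El gran perro marrón'
```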
Evangelizing the importance of substitutions across the organization is more art than science. We use a combination of code reviews, an internal documentation site, and presentations to ensure that engineers understand the implications of their translation helper composition on the quality of the resulting translation.
Optimize for making translation changes and make all strings uniquely identifiable
Earlier, I mentioned how we uniquely keyed translation strings in the extraction YAML files. For example, looking at the example YAML translation file below, despite the fact that the entries share the same English value (“Okay”), they are uniquely keyed:
updated.en-US.yaml

```yaml
bc1888e287ff1135d5c012bb61b76f14: "Okay"
3f7fee19e6f1e7a9aa99e64edfa0d010: "Okay"
bf658e8b81d0bd929b3bceafb14fdba9: "Okay"
```
Originally, we didn’t uniquely key each string instance if it had the same string content. Instead we defaulted to keying by value:

old.en-US.yaml

```yaml
"Okay": "Okay"
```
This meant that by default we treated three separate instances of the word “Okay” in code as a single translation. It was intuitive at the time that the same word in English should translate to the same word in other languages. Why treat them separately?
The problem with grouping strings by value is you lose the ability to translate the same string content differently depending on the context in which it appears. In other languages, “Okay” might not translate to the same word based on its enclosing UI. On some interfaces it may be more appropriate to translate “Okay” to “Yes” or “Approve” or some other value. Grouping strings by default removed the ability to exercise that nuance.
Fortunately, we had built in a mechanism to disambiguate string instances of the same value. An engineer could locate a problematic string instance and apply a manual `key` attribute to it in code, resulting in the following updated extraction:

old-with-manualKeys.en-US.yaml

```yaml
okayForModal: "Okay"
okayForConfirmationDialog: "Okay"
okayOnWelcomeScreen: "Okay"
```
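On the source side, applying a manual key might have looked something like this; the exact argument shape is an assumption for illustration:

```js
// Hypothetical manual-key usage to disambiguate identical strings.
const modalOkay = t('Okay', null, { key: 'okayForModal', project: 'dialogs' });
const confirmOkay = t('Okay', null, { key: 'okayForConfirmationDialog', project: 'dialogs' });
```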
The major downside was that this disambiguation required a code change. When a translator, product manager, language expert, or QA analyst saw a string on the production website that needed disambiguation, they couldn’t perform that disambiguation themselves. They needed to coordinate with an engineer who could find and modify the translation helper in source.
Translation updates weren’t only limited to string disambiguation. We also realized translations would sometimes break UIs, as some short words in English had much longer translations in other languages. As we expanded to more languages we ended up needing to disambiguate or modify strings constantly. This became a time sink across the organization, and the coordination overhead dramatically extended the time to fix incorrect translations in production.
Our original workflow optimized for the wrong thing. By defaulting to keying by value and grouping similar strings we figured we could save on upfront translation labor, but that came at the unintentional cost of extending the time to make translation fixes. We found that the upfront cost of string translation is low; translating a list of individual strings is relatively straightforward. Translating cohesive UIs is a much more complex problem, and in practice our translations required multiple iterations to feel right in other languages. The ability to modify problematic translations quickly became the most important workflow across the organization.
Generating unique keys for each translation by default and UTW’s full-code extraction helped us remove the need for engineering intervention to ship translation fixes. As a result, our subsequent language launches have been much more organizationally efficient and less error-prone.
Automate full-code translation extraction: the source code is the best stateful source of truth
The initial iteration of our translation extraction process was a script that engineers would use to manually extract translations and generate YAML files over their own source files. The thought was that engineers should be responsible for managing their own translation extractions like any other asset. If they added or deleted strings in their code they were expected to update their translations accordingly.
Individual engineers generating their own translations became a problem for several reasons:
- Without a way to know what files already had their translations extracted, there was no way to prevent two different engineers from extracting strings from the same files.
- For similar reasons, there was no way to know which files did not yet have their strings extracted.
- If a translation was updated in code an engineer would have to remember to re-extract it; otherwise, the updated source content would never make it to translators.
The fallout from these drawbacks was that we never had any certainty about the translation state, which made errors almost impossible to debug. Which files was a given translation YAML file extracted from? Were our translation YAMLs up to date? We couldn’t even be sure our extracted strings still existed in source code.
UTW was a months-long re-architecture project and a direct response to these problems. Performing full-code extractions allowed the source code to be our stateful source of truth. We could now ensure that the extraction occurred over 100% of our source code every time it ran, with no duplicates, and any additions, updates, and deletions to source code would always be reflected in the extraction files. Recreating a given extraction was as easy as checking out a specific Git commit hash and rerunning UTW, which tremendously simplified debugging translation errors.
Provide useful metadata to translators
Translators perform translations on the YAML files generated by our tooling. Originally they had no more information than the string itself, making it difficult to translate strings to be contextually appropriate. No contextual information about where a string appeared in UI meant many incorrect translations and frequent revisions.
homepage.en-US.yaml

```yaml
# Header at top of screen | source: ./homepage/header.js
29606e15e92ec2f6ef7ca287de476e12: "Welcome to Squarespace"
# | source: ./homepage/main.js
4232d07bfa1e2f0c2a50eec8200bc273: "Create a site"
# | source: ./homepage/main.js
99bd0ac936bd0e6489b402e77e951832: "Start a trial"
```
Today we provide more contextual metadata to translators to help mitigate this problem, passed as a YAML comment above a string entry. We pass both the contents of the `notes` attribute and the path of the file containing each string, which translators can view in their translation UI. Eventually we’d like to explore the potential of providing visual context—a screenshot capture of a given string—to provide translators with even more information.
Pluralization is tricky and unintuitive
Pluralizing strings in other languages is surprisingly complex. Our plural implementations are based on the globalize.js plural generator, which is itself based on the plural rules defined in the CLDR.
Pluralization is essentially returning the correct form of a string given a number value. Different languages have different rules around which form of a string to return, which can be particularly unintuitive for someone who’s only used to rendering a site in English. These rules are defined in the CLDR.
Here’s an example of writing a pluralized string via our `pluralize()` translation helper:

```js
const numDogs = this.dogs.length;

// With subs and translator notes
const dogsMessage = pluralize(
  { one: 'There is a {color} dog', other: 'There are %n {color} dogs' },
  numDogs,
  { color: 'brown' },
  { project: 'dogs' }
);
```
`pluralize()` has the following function signature: `pluralize(formsObject, numberValue, substitutions, project/note attributes)`.
Based on the CLDR, pluralized strings in all languages can be in one of six plural groups, represented by six keys: `zero`, `one`, `two`, `few`, `many`, `other`. The globalize.js plural generator is a function that resolves a number value (`numDogs`) and locale (`en-US`) to the correct group. In the case of the code above, that means that `pluralize()` reads the numeric value of `numDogs` and determines whether to use the `one` or `other` string, the only plural forms that apply to English.
If `numDogs` equals 1, then `pluralize()` will return the `one` form of the translated string: `There is a brown dog`.

If `numDogs` is equal to anything other than one, `pluralize()` will return the `other` form of the translated string: `There are %n brown dogs`.

The `%n` is a special token we provide that resolves to the plural number value.
| `numDogs` number value | Plural group | `dogsMessage` string |
| --- | --- | --- |
| 0 | `other` | There are 0 brown dogs. |
| 1 | `one` | There is a brown dog. |
| 13 | `other` | There are 13 brown dogs. |
| 100000 | `other` | There are 100000 brown dogs. |
The CLDR’s documentation on language plural rules illustrates how complex plural resolution is across languages. Some languages like English only use two cardinal plural groups, whereas other languages like Arabic use all six. French, despite the fact that it uses the same two cardinal plural groups as English, has different rules around how those groups are resolved (the `one` form is used for 0, 1, and decimals 0.0–1.5).
Fortunately, libraries like globalize.js abstract away much of this complexity, but it was important as a team to understand these details so we could evangelize best practices. It was particularly important to understand plurals as we increased our number of supported languages, as each language presented its own unique pluralization rules.
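To make the cross-language differences concrete, here is how the globalize.js plural generator resolves the same numbers in English, French, and Arabic (a standalone sketch against the public API):

```js
const Globalize = require('globalize');
const cldrData = require('cldr-data');

Globalize.load(cldrData.entireSupplemental());
Globalize.load(cldrData.entireMainFor('en', 'fr', 'ar'));

const pluralEn = Globalize('en').pluralGenerator();
const pluralFr = Globalize('fr').pluralGenerator();
const pluralAr = Globalize('ar').pluralGenerator();

pluralEn(0);  // "other" (English uses only `one` and `other`)
pluralFr(0);  // "one"   (French resolves 0 to the `one` group)
pluralAr(0);  // "zero"  (Arabic uses all six groups)
pluralAr(2);  // "two"
pluralAr(11); // "many"
```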
Don’t fear necessary, codebase-wide changes (but test them thoroughly)
At one point in our project, we faced a critical problem: we needed to make backwards-incompatible updates to all of the existing translation helper function signatures. There were two confounding factors:
- We were already shipping over 15,000 translations in production.
- We needed to make breaking changes on an active codebase, meaning engineers were adding, updating, and removing translations on a daily basis.
How could we update ~15,000 translation helper call sites while ensuring our changes wouldn’t negatively impact development or production?
To update all 15,000 call sites we used the power of jscodeshift, a codemod runner written by Facebook. jscodeshift works by finding and parsing files in a set of paths to their JavaScript ASTs, applying transformations to those ASTs as defined in a codemod script, and writing the transformed contents back to those files as JavaScript code. Our codemod would detect a translation helper, modify its arguments to match its new, desired function signature, and write the updated helper back to its source file.
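A codemod for jscodeshift is just a module that exports a transform. Here is a heavily simplified sketch of the pattern we used; the actual signature change was more involved than this hypothetical one:

```js
// Find every t() call and pad it to a three-argument signature,
// a stand-in for the real migration we performed.
module.exports = function transformer(file, api) {
  const j = api.jscodeshift;
  return j(file.source)
    .find(j.CallExpression, { callee: { name: 't' } })
    .forEach((path) => {
      while (path.node.arguments.length < 3) {
        path.node.arguments.push(j.literal(null));
      }
    })
    .toSource();
};
```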
We did a lot of testing to build confidence in our codemod and its output:
- Codemod: an extensive suite of unit tests.
- Source code: a script to validate the diff of the source code pre- and post-codemod.
- Translation output: a script to validate the post-codemod translation extractions to ensure they were equivalent to the original translation extractions (a sketch of this check follows).
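As an example of the third check, the equivalence test reduces to comparing two parsed YAML maps; a minimal illustrative version:

```js
// Return every key whose value differs between two extraction maps;
// an empty result means the extractions are equivalent.
function diffExtractions(before, after) {
  const keys = new Set([...Object.keys(before), ...Object.keys(after)]);
  const drift = [];
  for (const key of keys) {
    if (before[key] !== after[key]) {
      drift.push({ key, before: before[key], after: after[key] });
    }
  }
  return drift;
}
```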
After iterating on the comparison scripts for several weeks, we had full confidence in the safety of our changes and in shipping them to production. Updating all the call sites was a matter of running the codemod and checking in the updated source code in one commit. That one commit resulted in 40,000 lines changed across 1,100 .js/.jsx files.
Conclusion
Designing and implementing a system for front-end translation was an exercise in continuous learning about engineering pain points, internationalization standards, and QA and translation workflows. Although we made some initial incorrect design decisions along the way, they were valuable in exposing technical and procedural obstacles that we couldn’t have known about at the start of a greenfield project.
Through these lessons, we’re proud to say that our current translation system has reached a level of stability where we can reliably launch products in new languages with limited engineering interventions. But our work is far from over; as we add new languages and product offerings, we’re constantly evolving and revisiting our decisions about our processes, translation helpers, and build system.
If you’re interested in solving problems like these at Squarespace, we’re hiring!