
Localization. True way

This approach was developed after a long journey of trial, error, and pain. In this article I will explain how I arrived at it and how to localize applications properly.

What is the problem?

There are hundreds of talks and thousands of articles on this topic. So what can I tell you that is new, you may ask? The fact is that many of these articles have little to do with the real world and only confuse us as developers.

First, let me tell you the story of the long road I traveled while localizing one of my previous projects.


Let's go back to 2016. We had developed an application with localization. After reading the relevant articles, we made a JSON dictionary with Russian and English translations, with the actual strings buried deep in the nesting:

header: {
  leftZone: {
    navigation: {
      btn: 'Mega button!'
    }
  }
}
When the language was switched, the root component replaced this huge object, and all child components, as if by magic, rendered the correct translation to the user... Everything worked well, and we were very proud of our work, until it happened...

Our customer decided to add support for a third language: Spanish! After a little thought, we copied the English version of the JSON and emailed it to a translator.

The translator was shocked to receive the JSON. He didn't know how to work with this format: he had no software for editing JSON, no way to check spelling, no way to review changes, and so on. Eventually, the translator refused to work with it.


We began to realize that our path was not entirely correct, but there was no way back, so we decided to see it through to the end, and... we wrote a JSON -> CSV parser!

The CSV format was friendlier for our translator, and he was able to work with it. Once he sent us the edited CSV, we wrote a reverse CSV -> JSON parser. Everything seemed to be going well, and then...
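Such a JSON -> CSV flattening step can be sketched like this (an illustrative reconstruction, not our original code; the function names and the `;` delimiter are assumptions):

```javascript
// Hypothetical sketch of a JSON -> CSV flattening step: nested keys become
// dot-separated paths, producing one "path;text" row per string.
function flatten(obj, prefix = '') {
  return Object.entries(obj).flatMap(([key, value]) => {
    const path = prefix ? `${prefix}.${key}` : key;
    return typeof value === 'object' && value !== null
      ? flatten(value, path)
      : [[path, value]];
  });
}

const dict = { header: { leftZone: { navigation: { btn: 'Mega button!' } } } };
const csv = flatten(dict)
  .map(([path, text]) => `${path};${text}`)
  .join('\n');

console.log(csv); // → header.leftZone.navigation.btn;Mega button!
```

The reverse CSV -> JSON parser splits each path back into keys and rebuilds the nesting.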

Testing began! Corrections poured in, and there were many of them. We frequently sent the updated CSV to our translator, and he sent it back to us, all of it through our manager's email. At some point we lost track of the current version, and all our corrections were mixed into a large pile of variants that we could no longer untangle.

Since the JSON was nested, the version differences produced undefined fields, and our application stopped working!
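The failure mode is easy to reproduce (a minimal sketch; the `get` helper and key names are illustrative):

```javascript
// Why version drift broke the app: a nested lookup silently returns
// undefined for any key that was renamed or removed in a newer dictionary.
const dictionary = { header: { leftZone: { navigation: { btn: 'Mega button!' } } } };

const get = (obj, path) =>
  path.split('.').reduce((node, key) => (node == null ? undefined : node[key]), obj);

console.log(get(dictionary, 'header.leftZone.navigation.btn')); // → Mega button!
console.log(get(dictionary, 'header.rightZone.title')); // → undefined
```

Nothing throws at load time; the undefined value only surfaces when the component renders.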

And then we reinvented the wheel! We spent almost two months of our team's time on development. What was it? It was a page in the admin area where we could work with the current locale, editing translations through a friendly interface. English was taken as the default language. This is how it looked.

Admin area

It was a Single Page Application that implemented CRUD operations on our JSONs. We could add, update, and delete lines. There was no spell checker for the translators, they had to work on our dev server, and after each update we manually saved the JSON to our Git repository.

It was a nightmare! Now, let's figure it out.


First, you need to understand what localization is and what internationalization is.

I18N - Internationalization

It is a set of techniques that make subsequent localization possible, for example:

  • RTL (Right To Left) support
  • Support for special input tools

L10N - Localization

This is the adaptation of the application at the component level, for example:

  • Text translations
  • Processing of dates, currencies, color palettes

Gettext as a solution!

Forget JSON as a format for translators! We are programmers, we need JSON, but translators have to work with something else.

In order to design the right localization system, let's look at what most translators actually work with. Many desktop programs use the Gettext tool for translation.

Gettext is a free tool capable of generating translations based on the default language.


  • It has been around since the 90s
  • Supports many languages, including JS
  • The text in English is used as the ID, which is convenient: if there is no translation, the English version is used as a fallback.
  • Support for a git-like merge system: if we remove or add text nodes in our application, gettext can automatically remove them from or add them to the dictionary.
  • Support for plural forms in the text.
  • Support for variables within the translated text.
  • Special editor, with support for spelling control, various plugins. It exists for all operating systems and is a standard for translators.
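For reference, the dictionary a translator works with is a plain-text PO file; a typical entry pairs the English source string with its translation (illustrative Russian example):

```po
msgid "Hello"
msgstr "Привет"

msgid "Your name is %s"
msgstr "Вас зовут %s"
```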

Software for creating and working with the dictionary (PoEdit):


Source Text is our default language (English). All other languages are translated from it. It is also the default language in the application: if for some reason a string has no Russian translation, we will not get an error, the English string is displayed instead.

How it works

1. We need to add the default language to the application.

2. Convert fields with default text to gettext format and send it to the translator.

3. Having received the finished translation, convert it to JSON and insert it into our application.



1. Installation:

npm install @rockpack/localazer --save
npm install @rockpack/compiler --save-dev

2. We need to wrap the application in the LocalizationObserver component:

import { LocalizationObserver } from '@rockpack/localazer';

class Root extends Component {
  render() {
    return (
      <LocalizationObserver currentLanguage={this.state.currentLanguage} languages={this.state.languages}>
        {this.props.children}
      </LocalizationObserver>
    );
  }
}

languages is a set of JSONs received after translation; you can load it asynchronously from the server, or integrate it into your application bundle.

Using @rockpack/localazer

In the components of our application where we want to have text localization, we must:

import Localization, { l, nl, sprintf } from '@rockpack/localazer';

And in the JSX markup, add:

<Localization>{l('Hello')}</Localization>
The Localization component is needed to communicate with the LocalizationObserver. Thus, when switching from one language to another, all Localization components are updated automatically.

After that, when the dictionary is generated, all values passed to the l() function, namely the text "Hello", will be collected for further localization.

Localization with variables

If we need to pass a variable into the localized text:

<Localization>{sprintf(l('Your name is %s'), 'USER')}</Localization>

sprintf passes the variable into the localization text.
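Conceptually, this works like classic C sprintf; a minimal illustrative stand-in (not the library's implementation):

```javascript
// Minimal sprintf-style interpolation: replace %s/%d placeholders in order.
// Illustrative only; a real sprintf supports many more format specifiers.
function interpolate(template, ...args) {
  let i = 0;
  return template.replace(/%[sd]/g, () => String(args[i++]));
}

console.log(interpolate('Your name is %s', 'USER')); // → Your name is USER
```

The placeholder stays in the dictionary, so the translator can move it to wherever the target language needs it.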

Plural forms

For example, suppose we have a counter. With a value of 1 we should display "1 click", with 2 or more, "2 clicks".

To do this, pass the count variable to the sprintf function, which transfers variables into the localized node. The nl method chooses the singular or plural form based on the count passed to it.

<Localization>
  {sprintf(nl('%d click', '%d clicks', count), count)}
</Localization>
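Under the hood, ngettext-style selection simply picks a form based on the count; for English it can be sketched like this (illustrative, not the library's code):

```javascript
// English has two plural forms: the singular for n === 1, the plural otherwise.
// gettext generalizes this with a per-language "plural rule" expression.
function pluralizeEn(singular, plural, n) {
  const form = n === 1 ? singular : plural;
  return form.replace('%d', String(n));
}

console.log(pluralizeEn('%d click', '%d clicks', 1)); // → 1 click
console.log(pluralizeEn('%d click', '%d clicks', 5)); // → 5 clicks
```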


Now we need to extract all the translatable texts from our application and arrange them as a dictionary, so that the translator can create translations in other languages for us.


Let's create a script, separate from our application. Let's call it makePOT.js

The purpose of this script is to go through all the JS(x) files of our application and compose a dictionary of l()/nl() nodes for gettext.

const { localazer } = require('@rockpack/compiler');

localazer.makePot({
  src: './src/index.jsx',
  dist: './locales'
});

makePot has the following options:

dist: './po',
variables: {
  gettext: 'l',
  ngettext: 'nl'
},
defaultLanguage: 'en'

variables - the function names to process. If you use different names for the localazer functions in your project, you should override them here.

Creating a gettext dictionary

Now, after creating and configuring makePOT.js, we only need to run

node makePOT.js

After that, a dictionary with the .pot extension will be created at the path specified in dist. We then see the generated messages.pot file, which we can open with PoEdit.

We see the PoEdit window


By clicking the Create New Translation button, we create a new file with the PO extension for a new translation of our application. If this PO file already exists, for example the translator is already working with our dictionary, then makePOT.js will add new lines and remove unused ones, synchronizing the dictionary with our application.

When creating a dictionary, we have a Translation window where we can add a translation for the given text node. On the right we see suggestions for the selected language: for example, for the word "Hello" the program suggests the translation "Привет".

For plural forms, gettext has support according to the selected target language


For example, in Russian: 1 click - "1 клик", 10 clicks - "10 кликов". We can describe this behavior ourselves; I recommend learning PoEdit for more efficient work.
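In the PO file this behavior is driven by the Plural-Forms header; the standard rule for Russian selects one of three forms, and each form gets its own msgstr entry:

```po
"Plural-Forms: nplurals=3; plural=(n%10==1 && n%100!=11 ? 0 : "
"n%10>=2 && n%10<=4 && (n%100<10 || n%100>=20) ? 1 : 2);\n"

msgid "%d click"
msgid_plural "%d clicks"
msgstr[0] "%d клик"
msgstr[1] "%d клика"
msgstr[2] "%d кликов"
```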

After we complete the translation, we save it to a PO file. It should live in the Git repository, in the folder with the POT file. In the future, gettext will add new texts and remove unused ones on its own whenever makePOT.js is run.


Let's create a second script next to makePOT.js and call it po2json.js

const { localazer } = require('@rockpack/compiler');

localazer.po2json({
  src: './locales',
  dist: './src/locales'
});

po2json has the following options:

src: './po',
dist: './json',
defaultLanguage: 'en'

After that, run the script:

node po2json.js

Our script will generate a JSON version of the translations.

JSON connection

It remains to connect the JSON to the LocalizationObserver.

You can do this in whatever way is convenient for you:

  • Using a request to the server
  • Integrating into a bundle
  • Using localStorage
  • Any other

Just for example:

import { LocalizationObserver } from '@rockpack/localazer';
import ru from './locales/ru.json';

class Root extends Component {
  render() {
    return (
      <LocalizationObserver currentLanguage="ru" languages={{ ru }}>
        {this.props.children}
      </LocalizationObserver>
    );
  }
}


@rockpack/localazer and gettext are very powerful tools for localizing your applications. With this approach, you can work effectively with translators, using formats and programs that are convenient for them. And you no longer have to worry about your JSON files going out of date.

License MIT, 2020