Speed up copy and paste programming

While working at REDACTED, I reported the following issue.

It’s perfectly valid to program by copy-pasting code from one place to another. That may sound like a bold statement, but it’s sometimes very difficult to come up with a cleaner alternative. Of course, I’m not talking about breaking the DRY rule. For example,

if (x && y && !z) {
    // ...
}
// later
if (x && y && !z) {
    // ...
}
// later
if (x && y && !z) {
    // ...
}

breaks the DRY rule, and any programmer should replace it with

const condition = () => x && y && !z; // a function guarantees re-evaluation
if (condition()) {
    // ...
}
// later
if (condition()) {
    // ...
}
// later
if (condition()) {
    // ...
}

What I’m talking about is using a block of many lines of code as a template for generating similar functionality elsewhere in the application. For example, I could have already programmed some page using two files: some-page.html and some-page.ts. Later on, when I need to program another-page, which is very similar to some-page, I can copy some-page and apply the few changes needed to get another-page.

This practice is very popular all around the world, and there is no shame in doing it. Nothing stops you from writing a real code generator either. However, copy-paste-adapt is all you need, provided you keep the adapt step to a minimum. How do you do that? Dead easy: refrain from using specific identifiers in code, or move them from the code domain to the data domain.

For example, better names for that pair of files would be some/page.html and some/page.ts. This would allow me to copy the files in the some directory to the new another directory to immediately get another/page.html and another/page.ts files which preserve the structure without requiring any adaptation (like renaming).
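Under that layout, the copy step can be sketched in the shell (directory names and file contents here are made up for illustration):

```shell
# Sketch of copy-paste-adapt with zero adaptation: each page lives in
# its own directory with generic file names, so copying needs no renaming.
mkdir -p some another
echo '<h1>Some page</h1>' > some/page.html
echo 'export class Page {}' > some/page.ts
cp some/page.html some/page.ts another/   # no renaming needed
ls another
```

The only adaptation left is editing the copied file contents, not their names or locations.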

Simplify translations

While working at REDACTED, I reported the following issue.

We develop, in Spain, an application in English for Saudi Arabia: how is it that we still don’t have easy-to-use translations?

Our translations are cumbersome to manage. To add one you have to:

  1. create a hierarchical key like some-module.some-part.some-section.some-title
  2. add its translation in code like this.title = this.translateService.instant('some-module.some-part.some-section.some-title');
  3. open a terminal window at the app main directory, then issue $ npm run translation.export
  4. open a browser, then navigate to a third party website and authenticate
  5. search some-module.some-part.some-section.some-title
  6. select your key in the results (beware that approximate results are shown too)
  7. edit its translation in a dialog box
  8. save the translation
  9. open a terminal window at the app main directory, then issue: $ npm run translation.import
  10. reload the page to show the translated title

Why can’t I just add the translated key to a translations file? I guess it’s because the programmers who started developing the app were required to use that third-party website, so that professional translators could do their job. That is a reasonable requirement, but there is no need for programmers to continually export and import translations simply to make them appear on the page they are working on. Translations could be exported and imported automatically at any later time, for example when merging changes into the development branch.

It wouldn’t be difficult to write an extraction script to allow coding like this:

// .../src/some-module/some-part/some-section.translations.ts
export const translations = {
    "some-title": "..."
};

// .../src/some-module/some-part/some-section.ts
this.title = x('some-title');

There is a problem with extraction scripts, though. They are static text analyzers that read code and extract some.key from expressions like instant('some.key'). Thus they can’t extract interpolated keys like instant(`${some}.key`), where some is a string variable whose value will be set later. For example, if some = 'another', then ${some}.key would be another.key.
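A minimal static extractor can illustrate both the idea and the limitation. This is a sketch under assumptions: the helper is called x, and only single-quoted literal keys are matched (a real extractor would handle more call shapes):

```typescript
// Sketch of a static key extractor: scans source text for x('...')
// calls with literal keys. Interpolated calls like x(`${some}.key`)
// deliberately do not match, illustrating the static-analysis limit.
function extractKeys(source: string): string[] {
    const literalCall = /\bx\('([^']+)'\)/g;
    const keys: string[] = [];
    let match: RegExpExecArray | null;
    while ((match = literalCall.exec(source)) !== null) {
        keys.push(match[1]);
    }
    return keys;
}

const code = "this.title = x('some-title'); const t = x(`${some}.key`);";
console.log(extractKeys(code)); // only the literal key 'some-title' is found
```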

Interpolated keys are very useful for compressing many code lines into one cleaner expression. For example, this code

let translation;
switch (some) {
    case 'some':
        translation = x('some.key');
        break;
    case 'another':
        translation = x('another.key');
        break;
    case 'yet.another':
        translation = x('yet.another.key');
        break;
    default: // programmer error
        throw new Error(`Unexpected value for 'some' variable (got '${some}')`);
}

can be compressed into this one-liner:

const translation = x(`${some}.key`);

While it’s true that we can’t have both independence from run time (static analyzer) and the flexibility of interpolations (dynamic analyzer) in the same tool, we can easily get both benefits with a static analyzer and a bit of overhead. In fact, all we need to do is declare all those interpolated keys in a file that only needs to exist. (no need to use it anywhere)

// interpolated-translation-keys.ts

With a file like that, the static analyzer would find it, eat it, and spit out translatable keys, which could eventually be exported. At the same time, the code would work perfectly with interpolated translation keys.
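For example, the declarations file could simply list the literal keys behind the interpolated calls. This is a sketch: the keys come from the switch example above, and x is stubbed here only so the file stands alone:

```typescript
// interpolated-translation-keys.ts
// Never imported at run time; it exists only so the static analyzer
// can see the literal keys behind interpolated calls used elsewhere.
// (x is the translation helper assumed in the examples above,
// stubbed as an identity function to keep this sketch self-contained.)
const x = (key: string): string => key;

x('some.key');
x('another.key');
x('yet.another.key');
```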

Abandon tailored flows and adopt reusable flows

While working at REDACTED, I reported the following issue.

Our apps are based on Flow classes that define which pages they manage and how those pages are interconnected according to pressed buttons and input data.

General drawbacks of this flow machinery:

  • it’s needlessly complicated, basically due to a misunderstanding of how a highly configurable system is usually implemented nowadays;
  • it’s a layer on top of Angular, but it’s not Angular, which means that an expert Angular programmer must learn it before being productive;

Additional drawbacks of our flow implementation:

  • They are meant to reuse pages, but then, for example, a page that is used both for creating and for editing a resource is plagued by continuous branching to account for all the little differences between creating a resource and editing it afterwards. (what about component inheritance?)
  • They are meant to reuse pages but not flows themselves, so, given that a flow is associated with a set of URLs which change from creating to editing (continuing with the example above), those two practically identical flows for creating and editing must be represented by two separate classes.

These flows force us to

  1. show page 1 to the user and wait for her to press a navigation button,
  2. after she presses a navigation button, move captured data from page 1 to its flow,
  3. make the flow decide that page 2 is the page following page 1, according to the current state,
  4. move those captured data (and possibly some more) from the flow to page 2,
  5. show page 2 to the user and wait for her to press a navigation button,

which means that these flows force us to write lots of code that is hard to follow: it is spread across two different classes, and the logic that determines the next page depends strongly on the state of the current page, yet that state is documented much more richly by the page class (which defines it) than by the flow class (which uses it).

Additionally, many of our flows are fake flows: they are only a means to capture the data of a complex entity whose very big form is split into many pages, connected to one another in a linear fashion. The user can only move back and forth, from start to end, and the only exceptions to linearity are conditional skips.

We could just use a store to preserve state between pages and then

  1. tell pages that they belong to the sequence: page 1, page 2, …
  2. show page 1 to the user and wait for her to press the back or next button,
  3. show page 2 to the user and wait for her to press the back or next button,

While fake flows are rightly tailored and not reusable, there are also a few flows, like the confirmation flows, that are real flows and should be addressed with a reusable implementation. In our app, confirmation flows work like this:

  1. On an ORIGIN page, the user selects some documents to act upon, e.g. to delete them.
  2. After the user presses the Delete button, the confirmation flow starts.
    1. The confirmation page opens and lists the selected documents.
    2. The user can unselect some of them. (to not delete them anymore)
    3. The user can open and read some of them. (to make sure she really wants to delete them)
    4. The user can cancel the operation altogether, and get back to the ORIGIN page.
    5. The user can accept the operation on the remaining documents.
    6. If they could all be deleted, the ORIGIN page will show a success message.
    7. If they couldn’t all be deleted, the ORIGIN page will show a failure message.
    8. When the ORIGIN page is loaded again, all documents that the user had selected are still selected (if they appear in the list).

That is a small flow which could benefit from a reusable architecture. Instead of deleting, it could be about accepting or rejecting documents, copying them, archiving them, … whatever.

It could work like this:

// sketch: confirmationFlow.start is a hypothetical reusable entry point
originPage.deleteSelectedDocuments = () =>
    confirmationFlow.start({
        selected: originPage.getSelectedArticles(),
        onAccept: [Article, 'moveToTrash'],
    });

but our current implementation of flows doesn’t allow anything like that.
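To make the idea concrete, here is a sketch of what such a reusable confirmation flow could look like. Every name is hypothetical; it only shows that the same flow, parameterized by the selection and the accept action, serves deleting, archiving, copying, and so on:

```typescript
// Sketch of a reusable confirmation flow (all names hypothetical).
// The flow is parameterized by the selected items and the action to
// run on acceptance, so one implementation covers delete/archive/copy.
type Outcome = 'accepted' | 'cancelled';

class ConfirmationFlow<T> {
    private remaining: T[];

    constructor(selected: T[], private onAccept: (items: T[]) => boolean) {
        this.remaining = [...selected]; // items still subject to the action
    }

    unselect(item: T): void {
        this.remaining = this.remaining.filter(i => i !== item);
    }

    cancel(): Outcome {
        return 'cancelled'; // back to the ORIGIN page, nothing done
    }

    accept(): { outcome: Outcome; success: boolean } {
        // success drives the ORIGIN page's success/failure message
        return { outcome: 'accepted', success: this.onAccept(this.remaining) };
    }
}

// Usage: deleting documents, but the action callback could archive
// or copy them instead.
const flow = new ConfirmationFlow(['doc-1', 'doc-2'], docs => docs.length > 0);
flow.unselect('doc-2');
const result = flow.accept(); // { outcome: 'accepted', success: true }
```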