Provide a company’s programmer’s guide

While working at REDACTED, I reported the following issue.

Create, maintain, and promote the usage of a company-wide programmer’s guide. It would be best served by a Q&A (question and answer) website, with syntax-colored snippets, comments, votes, and reputation, very much like an on-premises StackExchange website.

It would answer questions like: How do we manage the money data type?

This is a real question I had to ask at one point, but neither Front end nor Back end folks could give me a definitive answer.

On the Back end they use something called BigMoney, a class with lots of attributes, and they document it as such on the Swagger website; yet some endpoints exchange a string containing a decimal number and a currency, while others exchange a POJO with an amount string and a currency string.

I was surprised by the inconsistencies, but what surprised me more was that nobody was able to give me a definitive answer. With a company-wide programmer’s guide there would be no doubt.

Of course, not being able to answer such a question meant that they also couldn’t answer related ones, like: How do we manage precision in money computations? How do we manage money localization?
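A single guide entry could settle these questions once and for all, for example by standardizing one wire format. A minimal sketch, assuming hypothetical names (these are not the company’s actual types):

```typescript
// Hypothetical guide entry: one canonical money shape for every endpoint.
// The amount travels as a string to avoid floating-point rounding issues.
interface Money {
  amount: string;   // decimal string, e.g. "12.34"
  currency: string; // ISO 4217 code, e.g. "USD"
}

// A single pair of helpers keeps parsing and formatting in one place.
function parseMoney(wire: string): Money {
  const [currency, amount] = wire.trim().split(/\s+/);
  return { amount, currency };
}

function formatMoney(m: Money): string {
  return `${m.currency} ${m.amount}`;
}
```

Whether the team picks this shape or another, the point is that the guide would name exactly one, and every endpoint and component would follow it.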

A few days after I asked my question, and pointed out that I considered it unacceptable for a banking software company not to know the answer, my Front end colleagues hacked a “solution” into an old input component.

The new money functionality was exposed through a generic text input component, with a special amount type, a required amount validation, and a couple of additional ugly quirks:

  • when you want to set a money value, like USD 12.34, you first have to erase the currency manually;
  • when you want to get a money value, like USD 12.34, you have to call the special getAmount instead of the usual getValue.
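In code, the two quirks looked roughly like this (a hypothetical reconstruction; the names are illustrative, not the component’s actual API):

```typescript
// Hypothetical sketch of the hacked-up generic input reused for money.
class TextInput {
  private value = "";
  setValue(v: string) { this.value = v; }
  getValue(): string { return this.value; }
  // Quirk 2: money consumers must call this instead of getValue().
  getAmount(): string {
    return this.value.replace(/^[A-Z]{3}\s*/, ""); // strips "USD " etc.
  }
}

const input = new TextInput();
// Quirk 1: to set "USD 12.34" you must erase the currency yourself first.
input.setValue("12.34");
// Reading it back requires remembering the special getter:
const amount = input.getAmount(); // "12.34"
```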

That would have been a mere showcase of juniority had code review fixed it. Instead it made it into the code of the UI Components module. I mean, juniors have some right to make certain mistakes, but seniors?

It would recommend basic best practices like:

  • avoid completely useless comments like this:
// form
form: Form;
  • avoid useless comments:
// search form
form: Form;

using self-documenting code:

searchForm: Form;
  • add useful comments when your self-documenting code doesn’t cut it:
// We put searchForm here because we want to offer
// our users alternatives and upsells that take into
// account all their previous choices

Allow integrations with pluggable services

While working at REDACTED, I reported the following issue.

It’s quite stressful to bounce from one broken integration server to the next.

  • I started developing against the official OpenShift integration server, then against some machine directly managed by the supervisor of the Back end engineer I was working with, then against that engineer’s own machine, and then switching from one to the other as the wind blew.
  • Each time, except for short periods of less than a day, a problem arose. Now a required service is not running, now there is no data to show, now the current user lacks the needed permissions, now you have to pass groupName to the endpoint instead of name, and on and on.

There are many important features that a sane integration server should have.

  • minimize the deployment time of any service to a handful of seconds
  • maximize the server’s robustness by allowing access to all necessary services
  • offer both real and mocked responses (given that Back end knows when to use the former or fall back on the latter)
    • mocked responses would have a special header listing mocked endpoints used
      • example: Mocked = ['ENDPOINT-1 ID-X', 'ENDPOINT-2 ID-Y', ...]
        • Why the IDs? Because they would allow mocked workflows. (see below)
    • requests would always allow a special header to get specific mocked responses back; mocked responses would be documented elsewhere by Back end, and their ID would allow the Front end to mock flows
      • example: Mocked = ['ENDPOINT-3 ID-Z']
        • Use case: it’s possible to mock a search / results / details workflow with the following two mocked requests:
          1. Request: Mocked = ['bookResults forAuthor_JKR'] to run after the user clicks a Search button;
          2. Request: Mocked = ['bookDetails forBook_HP1'] to run after the user clicks a book title.
    • thus:
      • a request with a mocked header would always get a mocked response with at least as many mocked services as requested.
      • a request without a mocked header could always get a mocked response, according to services’ availability.
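The convention above can be sketched in a few lines (the Mocked header name and the 'ENDPOINT ID' format are assumptions from this proposal, not an existing standard):

```typescript
// Hypothetical helpers for the mocked-request convention.

// The client lists the "ENDPOINT ID" pairs it wants mocked.
function mockedHeader(mocks: string[]): Record<string, string> {
  return { Mocked: mocks.join(", ") };
}

// The server echoes the mocks it applied; the client can verify it got
// at least as many mocked services as requested.
function gotRequestedMocks(requested: string[], echoed: string[]): boolean {
  return requested.every(m => echoed.includes(m));
}

// The search / results / details workflow from the use case above:
const searchRequest = mockedHeader(["bookResults forAuthor_JKR"]);
const detailsRequest = mockedHeader(["bookDetails forBook_HP1"]);
```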

Back end programmers should never program locally but against a virtual private server. They would be able to:

  • generate a new clean instance of a server from a preconfigured official image,
  • push the service they are developing, and its changes, to it as many times a day as needed,
  • connect their server to the load balancer hub without any time-consuming tasks,
  • set up auto-reboot after a crash, with a previous version as a fallback.

That setup would allow both Back end and Front end programmers to do their jobs smoothly. By turning the Back end into a virtual API from the inception of a service, by means of mocked responses, both teams can work independently of each other, because the virtual API keeps running even while real services are being developed, fixed, or maintained.

Serve both raw data and populated documents to Front end

While working at REDACTED, I reported the following issue.
  • Back end programmers tend to provide raw data only.
  • Front end programmers can retrieve data from many endpoints.

Shouldn’t we then use the browser to aggregate data?

No: if there is one reason for having interfaces between different systems, it is to hide their inner workings and offer more coherent, more abstract, more complete and, in general, easier-to-understand operations and results.

The Front end side needs both aggregated and raw data from the Back end.

For example, take a search results page for authors’ profiles based on their language.

  1. A language select box needs to show English, Spanish, … (raw data) for the user to pick one
  2. The resulting profiles need each to show the number of authored documents, and their titles (populated documents)

Of course I could use three normalized endpoints:

  1. getLanguages() for the options in 1.
  2. getProfiles(page) for the results in 2.
  3. getDocuments(authors) for the titles in 2.

but that would

  • add a call from Front end to Back end, causing unnecessary slowness for the end user;
  • add knowledge to Front end about Back end implementation details;
  • add complications to Front end, just because.

Those complications are:

  1. get the authors from the profiles in the current page;
  2. asynchronously get the documents from getDocuments(authors), taking care of possible errors;
  3. then group the titles and the count of those documents by author;
  4. then make those additional data appear in matching rows on the page.
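For illustration, here is roughly what those four steps cost the Front end (the types and the getDocuments signature are assumed, not the actual API):

```typescript
interface Profile { author: string }
interface Doc { author: string; title: string }

// Step 3: group document titles by author.
function groupTitlesByAuthor(docs: Doc[]): Map<string, string[]> {
  const byAuthor = new Map<string, string[]>();
  for (const d of docs) {
    const titles = byAuthor.get(d.author) ?? [];
    titles.push(d.title);
    byAuthor.set(d.author, titles);
  }
  return byAuthor;
}

// Steps 1, 2 and 4: collect authors, fetch asynchronously, attach
// counts and titles to each row.
async function enrichProfiles(
  profiles: Profile[],
  getDocuments: (authors: string[]) => Promise<Doc[]> // assumed endpoint
) {
  const authors = profiles.map(p => p.author);          // step 1
  let docs: Doc[] = [];
  try {
    docs = await getDocuments(authors);                 // step 2
  } catch {
    /* degrade gracefully: show profiles without documents */
  }
  const byAuthor = groupTitlesByAuthor(docs);           // step 3
  return profiles.map(p => ({                           // step 4
    ...p,
    titles: byAuthor.get(p.author) ?? [],
    documentCount: (byAuthor.get(p.author) ?? []).length,
  }));
}
```

All of this plumbing exists only to reassemble data the Back end had together in the first place.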

Could the operations above be performed in the Back end? Not only could they, but they would be easier and faster there, because getDocuments would not be asynchronous.

Instead, a Back end engineer told me he wouldn’t populate a response because it would set a precedent, and all the other Front end engineers would then ask for the same. Another Back end engineer said that long ago they had foreseen the need for a data aggregator but lacked the willpower to actually program it. (I couldn’t believe these guys!!)

Data population is so common a concept that many Back end APIs provide ways to get only selected properties. For example, there could be a better endpoint like this:

getProfiles(page, { 
    populate: ['documents', { 
        populate: ['title'] 
    }]
});
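A populated response for such a call could then carry everything each result row needs in one round trip (the shape below is illustrative, not an existing contract):

```typescript
// Hypothetical shape of one row in a populated getProfiles response.
interface PopulatedProfile {
  author: string;
  documentCount: number;
  documents: { title: string }[]; // only the requested property populated
}

const exampleRow: PopulatedProfile = {
  author: "JKR",
  documentCount: 2,
  documents: [{ title: "HP1" }, { title: "HP2" }],
};
```

The Front end would render rows directly, with no client-side joins, grouping, or extra error handling.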