Breaking the monolith – How to design your system for both flexibility and scale – Part 5: The Common

This post is part of a series. You’d probably want to read it from the beginning, or check out the previous post in the series.

One of the ways to boost performance is ruthless consolidation and parallelization of your critical sections (the parts of your system that do the heavy lifting). For that to happen you need to figure out a rule of thumb for each of those “heavy lifting” processes, or which actions are common to the processing of any input you may encounter.

Taking that knowledge, we would like to craft a new computational solution that is general enough to handle all those shards of data we receive as input and pump them into a molding process. This molding process will filter, translate, enrich, envelope, sum or convert (etc.) those shards into a form that can be easily processed by a parallel (and hopefully stateless and/or asynchronous) computation. It is common for these preparation processes to use the “adapter” design pattern, but that is really up to you and the specifics of your problem.
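To make the idea concrete, here is a minimal Python sketch of such a preparation step: a hypothetical adapt() function reshapes raw input shards into the single form a stateless critical section expects, before that section is fanned out in parallel. RawShard, WorkItem and all the field names are invented for this example.

```python
from dataclasses import dataclass
from concurrent.futures import ProcessPoolExecutor

# Hypothetical raw input "shard" as it arrives from upstream sources.
@dataclass
class RawShard:
    source: str
    payload: dict

# Hypothetical normalized form every downstream worker expects.
@dataclass
class WorkItem:
    key: str
    value: float

def adapt(shard: RawShard) -> WorkItem:
    # Preparation step ("adapter"): translate and envelope the raw shard
    # into the single shape the critical section knows how to handle.
    return WorkItem(key=shard.payload["id"], value=float(shard.payload["amount"]))

def heavy_lifting(item: WorkItem) -> float:
    # Stateless critical section: safe to fan out across processes.
    return item.value * 2  # stand-in for the real computation

if __name__ == "__main__":
    shards = [RawShard("billing", {"id": "a", "amount": "3.5"}),
              RawShard("crm", {"id": "b", "amount": "7.0"})]
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(heavy_lifting, map(adapt, shards)))
    print(results)  # [7.0, 14.0]
```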

Looking back at projects I have done, there are a lot of actions common to critical sections that you will probably gain from moving into those preparation processes. My top five are listed below, with a rough sketch of how they might compose right after the list:

  1. Filtering – of entire messages or of specific fields.
  2. Enrichment – all interaction with high-latency data sources.
  3. Translation and conversion.
  4. Partial sums and buffering.
  5. Estimation and data minimization – using a simpler “softer” (= guessing) process to make the “hard” (= still guessing, but now in denial) decision that much easier.
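Below is a rough, generator-based Python sketch of how the first four of these stages might chain together over a stream of dict-shaped messages. Every function and field name here (filter_stage, enrich_stage, user_id, amount, and so on) is hypothetical and only illustrates the shape of the idea.

```python
from itertools import islice

def filter_stage(messages, required="user_id"):
    # 1. Filtering: drop whole messages that lack a required field.
    return (m for m in messages if required in m)

def enrich_stage(messages, region_lookup):
    # 2. Enrichment: resolve the high-latency lookup once, up front.
    for m in messages:
        m["region"] = region_lookup.get(m["user_id"], "unknown")
        yield m

def convert_stage(messages):
    # 3. Translation and conversion: normalize types before the hot path.
    for m in messages:
        m["amount"] = float(m.get("amount", 0))
        yield m

def batch_stage(messages, size=2):
    # 4. Partial sums and buffering: hand the critical section pre-summed batches.
    it = iter(messages)
    while batch := list(islice(it, size)):
        yield sum(m["amount"] for m in batch)

if __name__ == "__main__":
    raw = [{"user_id": 1, "amount": "3"}, {"amount": "9"},
           {"user_id": 2, "amount": "4"}, {"user_id": 3, "amount": "5"}]
    lookup = {1: "eu", 2: "us"}
    prepared = batch_stage(convert_stage(enrich_stage(filter_stage(raw), lookup)))
    print(list(prepared))  # [7.0, 5.0]
```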

There are of course many others, and most of these can be run in parallel themselves. Let’s have a look at some of the other kinds of commonalities between processes we could have:

  1. Operational – working in the same manner, but possibly in a different context.
  2. Functional – doing roughly the same job, maybe on other sets of data.
  3. Superficial – having a close or similar API, or working on the same inputs; this may imply 4, 5, or both.
  4. Circumstantial – invoked or caused by the same event in the system.
  5. Synchronal* or Coincidental** – working, or able to work, in parallel (though not necessarily doing the same job or having the same input).
  6. Intentional – transforming different inputs into the same outputs, or having similar results (sometimes seen as a special case of 1 or 2).

Again, this list is partial, and mileage may vary depending on your flavor of software engineering. However, what you ought to take from this is that these are markers, faint clues on the system map – that here may lie a performance treasure. Grouping those similar processes and wrapping them with the appropriate preparation processes can drastically shrink the code base, reduce the chance of bugs, and let you make the most of general-purpose, optimized libraries.
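As a rough illustration of that kind of consolidation, the Python sketch below replaces a handful of hypothetical, functionally similar reporting jobs with one parameterized aggregate() worker driven in parallel; all the names and data are made up for the example.

```python
from concurrent.futures import ThreadPoolExecutor
from statistics import fmean

# Before consolidation you might carry report_daily_sales(), report_hourly_errors()
# and friends as near-copies; one parameterized worker can replace them all.

def aggregate(records, key, reducer):
    # A single general-purpose critical section instead of N similar ones.
    return reducer(r[key] for r in records)

if __name__ == "__main__":
    sales = [{"total": 10.0}, {"total": 12.5}]
    errors = [{"count": 3}, {"count": 8}]
    jobs = [(sales, "total", fmean), (errors, "count", sum)]
    with ThreadPoolExecutor() as pool:
        results = pool.map(lambda job: aggregate(*job), jobs)
    print(list(results))  # [11.25, 11]
```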

So go to your calendar right now and schedule an hour. Take that hour to have a look at your system architecture – find those commonalities, those critical sections and those preparation processes, and see what can be generalized and simplified further, and what can be shifted earlier in the flow, making your system more state-independent and more asynchronous.

 

This one was a “short and sweet” type of deal, but next time we will take a plunge into the world of microservices and service architecture, and try to answer the age-old question of why SOA is such a dark, dark term.

 

* Yes, it’s a word. In English. Google it.

** I don’t like that one. It’s English, but.. you know..

Adam Lev-Libfeld

A long distance runner, a software architect, an HPC nerd (order may change).



