From monolith to microservices – to migrate or not to migrate?
Microservices is an architectural style focused on the speed of software development, defined either as the number of functionalities created within a unit of time or as the duration of the whole delivery process, from concept to deployment (time to market). The high changeability of today's business environment, which forces companies to react quickly so that a good solution implemented too late does not become a bad one, fosters the growing popularity of the microservice approach.
Most of today's enterprise-class systems have a monolithic architecture. Their indisputable advantage is, of course, the fact that they work and generate income or savings for the companies that own them. However, as these systems grow, a monolithic architecture causes the pace of their development to decrease gradually. Business owners must wait longer for the functionalities they have ordered. To make matters worse, scalability of the software development process turns out to be far from linear: engaging more people or teams to work on such systems yields fewer and fewer benefits. Onboarding new employees takes more and more time, while the existing ones become discouraged and demand extra pay for harmful working conditions, or start to ponder a career path outside the organization. Such symptoms clearly indicate that the system architecture has ceased to meet the company's requirements. Applying an evolutionary architecture, such as microservices, is the best way to address an insufficient pace of system development.
To illustrate the effect of using the microservice approach, we can use a metaphor. Let's assume that we would like to build and maintain a space station. In order to do that, we will need to deliver various cargoes to orbit: people, materials, equipment, etc. At present, the only available form of transport is the space flight, which, despite the latest achievements of companies such as SpaceX, is very expensive and requires time-consuming preparations. Hence, we can try to come up with another solution: a space elevator. Obviously, the cost of building it will be much higher than the cost of a single flight.
However, each subsequent transport will be possible basically right away and free of charge (compared to flight costs).
The metaphor, apart from illustrating the vision of a bright future which microservices promise us, leads to another important conclusion: implementing such an approach constitutes a huge challenge and requires significant investment. Therefore, before we start to build a space elevator, we should make sure that we actually need to get to orbit and will be going there frequently. Otherwise, the whole endeavor will merely be art for art's sake.
To migrate or not to migrate?
Before deciding to apply the microservice approach, you should consider a few questions:
- Is the product (system) market-proven?
- Does the expected pace of product development require engaging more than one team (~10 people)?
- Does the system have high requirements related to reliability and scalability, or do these requirements vary significantly for its individual elements?
The moment in the system life-cycle at which these criteria are fulfilled is the optimal time to decide on the microservice approach.
You should not forget that the microservice approach has its limits of applicability; for instance, it should not be used for real-time systems.
Sam Newman, a widely acknowledged author on microservice architecture, has come up with the following definition: "small autonomous services modelled around a business domain that work together".
It turns out that the basic building blocks in this approach are services, which we extract using decomposition by domain (business capability). These services can be developed and deployed independently, but they must cooperate in order to implement a business process.
If a component of the system merely stores data, it is basically a database; if it contains only logic, it is a function. A service, on the other hand, comprises both of these elements: logic and data. This combination creates the foundation of autonomy to which Sam Newman draws attention. It is worth keeping this definition in mind while approaching the issue of system decomposition.
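The "logic plus data" idea can be sketched in a few lines of Python (the article itself is language-agnostic, and all the names below are purely illustrative): the service owns its data store privately, and other components can only reach it through the operations it exposes.

```python
# A minimal sketch of service autonomy: the service owns both its logic and
# its data. Other components interact only through its operations, never
# through its data store directly. All names here are illustrative.

class CustomerService:
    """Owns customer data and the logic that governs it."""

    def __init__(self):
        self._customers = {}  # private data store; nothing else reads it

    def register(self, customer_id, name):
        # Logic and data change together, inside the service boundary.
        if customer_id in self._customers:
            raise ValueError("customer already registered")
        self._customers[customer_id] = {"name": name, "vip": False}

    def grant_vip(self, customer_id):
        self._customers[customer_id]["vip"] = True

    def is_vip(self, customer_id):
        return self._customers[customer_id]["vip"]
```

A component that exposed `_customers` directly would degenerate into a shared database, and one with no state of its own would be a mere function; keeping both sides behind one boundary is what makes the service autonomous.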
Architectural styles differ from each other in the way they decompose a system into smaller components. A significant aspect of the microservice architecture is the organization of service functionalities around business capabilities, which ensures their high cohesion and the stability of the established division. This method of harnessing the complexity of business logic was popularized by Eric Evans under the name "Domain-Driven Design". It describes how to divide the domain into subdomains and then designate bounded contexts within them, which will be used as service boundaries.
A practical technique for identifying bounded contexts is "Event Storming", suggested by Alberto Brandolini. Its first step involves identifying events occurring in the business domain. Such an approach directs the modelling process towards behavior instead of focusing on the static structure of the information processed. This seemingly subtle change of perspective is crucial for the microservice architecture, because it enables developing a system characterized by loose coupling and great autonomy of its individual services.
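The outcome of such a session can be captured directly in code. The sketch below (with hypothetical contexts and event names) shows the kind of artifact Event Storming produces first: domain events named in the past tense, grouped by the bounded context they were discovered in.

```python
# A sketch of an Event Storming outcome: domain events, named in the past
# tense, grouped by bounded context. The contexts and events are illustrative.

from dataclasses import dataclass
from datetime import datetime


@dataclass(frozen=True)
class DomainEvent:
    occurred_at: datetime


# Events discovered in a hypothetical Sales context:
@dataclass(frozen=True)
class OrderPlaced(DomainEvent):
    order_id: str


@dataclass(frozen=True)
class OrderPaid(DomainEvent):
    order_id: str


# Events discovered in a hypothetical Shipping context:
@dataclass(frozen=True)
class ParcelDispatched(DomainEvent):
    order_id: str
    tracking_number: str
```

Note that the model says nothing yet about tables or entities; it records what *happens*, which is exactly the shift of perspective described above.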
When setting the boundaries of services, you should not forget that they will become the boundaries of transactions and of strong immediate consistency (ACID). For operations which involve several services, the system will provide BASE (Basically Available, Soft state, Eventual consistency) semantics, which guarantees liveness but not safety. Unlike ACID, BASE means that the system will eventually achieve consistency; however, it is not known what that state will look like, nor how the system will behave in the meantime. It is possible to achieve strong eventual consistency within the BASE model, without traditional mechanisms of concurrency control; however, this requires using so-called CRDTs (conflict-free replicated data types).
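To make the CRDT idea concrete, here is a minimal sketch of one of the simplest CRDTs, a grow-only counter (G-Counter). Each replica increments only its own slot, and merging takes the element-wise maximum, so all replicas converge to the same value regardless of the order in which they exchange state.

```python
# A grow-only counter (G-Counter), a minimal CRDT sketch. Merging via
# element-wise max is commutative, associative and idempotent, which is
# what guarantees strong eventual consistency without coordination.

class GCounter:
    def __init__(self, replica_id):
        self.replica_id = replica_id
        self.counts = {}  # replica_id -> count observed so far

    def increment(self, amount=1):
        # A replica only ever increments its own slot.
        self.counts[self.replica_id] = self.counts.get(self.replica_id, 0) + amount

    def merge(self, other):
        # Take the per-replica maximum; applying a merge twice, or in a
        # different order, yields the same result.
        for rid, count in other.counts.items():
            self.counts[rid] = max(self.counts.get(rid, 0), count)

    def value(self):
        return sum(self.counts.values())
```

After two replicas exchange state in any order, `value()` agrees on both, which is exactly the "strong eventual consistency" guarantee mentioned above.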
Reality is not transactional. Users rarely require immediate consistency. An example is the finance domain, which, it would seem, should have the highest consistency requirements. Nevertheless, we have all gotten used to the fact that interbank transfers take hours or even days, and we do not know what happens to the funds during this operation: we can see them neither on the source account nor on the target one.
Software engineers themselves often tend to force immediate consistency where it is unnecessary, and sometimes even harmful. I once came across a large company management system in which granting a customer VIP status within the CRM module made the logistics subsystem automatically generate an order to ship a bottle of champagne, which was supposed to additionally emphasize the distinction. The whole operation was performed as a single transaction. As a result, when the warehouse ran out of liquor, the attempt to order the shipment ended with an error, the whole transaction was rolled back, and the client's VIP status could not be set. In this case, dividing the operation into two separate transactions and using the saga pattern would be a better solution.
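The fix described above can be sketched as follows (the services are illustrative stubs, not the actual system from the anecdote): each step runs as its own local transaction, and a failure in a later step is handled by a business decision, here simply skipping the optional gift, instead of rolling back the whole chain.

```python
# A sketch of splitting the VIP operation into separate local transactions,
# in the spirit of the saga pattern. All classes are illustrative stubs.

class OutOfStockError(Exception):
    pass


class CrmService:
    def __init__(self):
        self.vip = set()

    def grant_vip(self, customer_id):
        # Local transaction #1: commits on its own.
        self.vip.add(customer_id)


class LogisticsService:
    def __init__(self, champagne_in_stock):
        self.stock = champagne_in_stock
        self.shipments = []

    def ship_champagne(self, customer_id):
        # Local transaction #2: may fail independently of #1.
        if self.stock == 0:
            raise OutOfStockError
        self.stock -= 1
        self.shipments.append(customer_id)


def grant_vip_saga(crm, logistics, customer_id):
    crm.grant_vip(customer_id)
    try:
        logistics.ship_champagne(customer_id)
    except OutOfStockError:
        # Business decision: the gift is optional, so we keep the VIP
        # status rather than compensating step #1. A saga for a mandatory
        # step would instead trigger a compensating action here.
        pass
```

With this split, an empty warehouse no longer blocks the CRM: the customer becomes a VIP, and the shipment can be retried or dropped independently.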
To sum up, you should check what consistency guarantees are required for particular functionalities and make sure that the established boundaries secure these requirements.
Introducing the microservice architecture brings many challenges, which should be discussed before the transformation process starts. A system designed this way will require automation of the build, configuration, testing and deployment processes. Tools for collecting and aggregating logs and metrics, as well as for behavior analysis (tracing, profiling, etc.) in a distributed environment, will also be crucial.
At the early stage of the transformation process, it is necessary to determine the way of integrating and coordinating services, the data architecture, the methods of providing transactional consistency and reliability, configuration, service discovery, and other cross-cutting concerns. Introducing or changing these solutions at later transformation stages will be much more costly than at the beginning.
The issues mentioned above are so important and broad that they deserve a separate article. Neglecting the area of integration in particular may result in a lack of service autonomy and leave us with a distributed monolith instead of the expected microservice architecture.
Evolution or revolution?
Once we have set the service boundaries, which will act as the structure of the target solution, we must decide how to carry out the transformation: are we going to renew the system step by step, by gradually extracting subsequent components, or are we going to create the whole system from scratch and put it into service once the whole operation is completed? The second option is certainly much simpler and more tempting, but in most cases unacceptable. Under strong competition and high market dynamics, only a few companies can afford to suspend, for a longer period of time, the development of the IT system on which their key business processes depend. If, along with the transformation of the architecture, we also want to change the technology or a key framework, then the first approach cannot be applied either. In such a situation, we may adopt an approach called the strangler pattern.
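The essence of the strangler pattern can be sketched as a routing facade in front of the monolith (all handlers below are hypothetical stubs): every request goes either to the legacy system or to an already-extracted service, and the routing table grows as subsequent capabilities migrate, until the monolith can be retired.

```python
# A minimal sketch of the strangler pattern: a facade routes each request
# either to the legacy monolith or to an extracted service. All handlers
# and paths are illustrative.

def legacy_handler(path):
    return f"monolith handled {path}"


def billing_service_handler(path):
    return f"billing service handled {path}"


class StranglerFacade:
    def __init__(self):
        # Capabilities migrated so far; everything else still hits the monolith.
        self.migrated = {"/billing": billing_service_handler}

    def route(self, path):
        for prefix, handler in self.migrated.items():
            if path.startswith(prefix):
                return handler(path)
        return legacy_handler(path)

    def migrate(self, prefix, handler):
        # Each newly extracted service "strangles" one more part of the monolith.
        self.migrated[prefix] = handler
```

In practice the facade is usually an API gateway or a reverse proxy rather than application code, but the principle is the same: callers never notice which side of the boundary served them.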
How can we perform gradual transformation to the microservice architecture?
Let’s take the system with a monolithic structure as a starting point for the process:
The first step is a partial logical separation of the user interface from the service layer. Handling of business logic commands is delegated to the service layer, while queries, which support the views, are directed straight to the database. At this point we are not modifying the database itself:
The second step is a full logical separation of the user interface:
In the third step, you physically separate the user interface, create an API on the backend side, and use it for communication between these two components:
The fourth step involves gradual extraction of subsequent services. This time we perform a full separation, down to the database level. If the organization has not used the microservice architecture before, it is best to start decomposition with a domain which is small and easy to extract. This will allow the team to gain the necessary experience with little risk and in a relatively short time.
The last step is frontend decomposition, which we may also perform in stages – by gradually extracting subsequent user interface elements:
An indisputable advantage of this process is its evolutionary character. Thanks to it, we may gradually change the architecture without fully stopping system development, adjusting the pace of changes to business requirements and available resources.
System architecture does not drift in a void. It is strongly tied to the development process and to the company's structure and culture. The key feature of the microservice architecture is its evolutionary character, which means that choosing this approach will bring the most benefits to a company composed of small autonomous teams using agile software development methodologies. However, a lack of these features should not stop us from selecting this architecture: we may adopt them simultaneously with the process of changing the architecture, but we cannot ignore them completely. According to Conway's law:
“…organizations which design systems … are constrained to produce designs which are copies of the communication structures of these organizations.”
Thus, if we introduce the microservice approach in a strongly centralized, hierarchical company, there is a risk that over time our system's architecture will drift back towards a monolith. That is why we should start transforming the system architecture with the so-called Inverse Conway Maneuver, that is, developing an organizational structure which is isomorphic with the expected target system architecture. Once we remove the traditional functional silos (frontend dev, backend dev, DBA, QA, ops, etc.) and introduce cross-functional teams focused around value streams and business capabilities instead, it will be much easier for us to decompose the system analogously and then maintain the resulting architecture.
“Design the organisation you want, the architecture will follow (kicking and screaming).”
Evan Bottcher, ThoughtWorks
Once we have completed the transformation process, it is worth checking whether we achieved the assumed result and whether it can be called a success. The goal of introducing the microservice architecture is, first of all, to improve the processes of developing and improving software. We can measure this with a few simple indicators, for example:
- duration of the production cycle, defined as the average time from concept to deployment (time to market);
- throughput of the production process, measured as the average number of functionalities (user stories) delivered by the team (or per team member) per unit of time;
- scalability of the production process, measured as the change in throughput as a function of team size and the number of teams;
- average time necessary to locate and fix a failure (mean time to repair).
By comparing the values of these indicators for the old and the new architecture, we can evaluate the effect of the transformation. They can also be monitored during the transformation process itself.
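The first two indicators reduce to simple arithmetic over per-feature records. The sketch below uses invented example data; in practice the records would come from the team's issue tracker and deployment logs.

```python
# A sketch of computing the indicators above from per-feature records.
# The data and team figures are invented for illustration.

from datetime import date

# Each record: (concept date, production deployment date, user stories delivered)
features = [
    (date(2023, 1, 2), date(2023, 1, 30), 3),
    (date(2023, 2, 1), date(2023, 2, 15), 2),
    (date(2023, 3, 1), date(2023, 3, 22), 4),
]

# Indicator 1: average time to market, in days.
cycle_times = [(deployed - conceived).days for conceived, deployed, _ in features]
avg_time_to_market = sum(cycle_times) / len(cycle_times)

# Indicator 2: user stories per team member per month.
team_size, months = 6, 3
stories = sum(s for _, _, s in features)
throughput_per_member = stories / (team_size * months)

print(f"avg time to market: {avg_time_to_market:.1f} days")
print(f"stories per team member per month: {throughput_per_member:.2f}")
```

Collecting the same records before and after the migration makes the comparison in the paragraph above a mechanical exercise rather than a matter of opinion.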
Here at Altkom Software & Consulting, as engineers, we are fascinated by solutions which allow us to improve the world around us. However, we are aware that every change is an investment which has to pay off.
Lead Software Engineer