In 1968, Melvin Conway published a paper asserting that system design often reflects the communication structure of the organization building the system. In practice, this often meant that large co-located teams developed monolithic systems, while smaller distributed teams built more loosely coupled ones. Conway also observed that the initial design of a system is almost never optimal or final, which makes flexibility and agility far more important design criteria. This flexibility, and inbuilt capacity for change, is the reason microservices have become the chosen architectural path for many organizations, large and small, as they seek to develop and scale the products we demand today and those we will value tomorrow.
This vision for transformation is all very well, but those on the front line tasked with implementing a microservices architecture must answer some fundamental questions:
- How do we move away from a single, normalized database?
- How do we maintain state across services?
- How much complexity should we introduce to gain flexibility?
Self-contained Data Resources
A fundamental difference from traditional architectures is the expectation that each service maintains its own data resources. This lets each service interact with its data in the most convenient form, often resulting in a mixture of SQL and NoSQL databases. The objective is simple: any given microservice should be unconstrained by, and entirely independent of, the other services in the system.
Of course, traditional wisdom immediately warns of data fragmentation and redundancy, but this is offset by the fact that each microservice has a clearly bounded context and full independence. A thornier issue is data consistency: since the services must communicate somehow, and their respective data models aren't connected, an intermediary is needed to manage data distribution.
At the heart of this data distribution challenge lies an event-driven application architecture. In this integration pattern, a microservice publishes an event for each business transaction or data update, and other interested microservices subscribe to those events. When a service receives an event, it can use the data, perform the expected business task, and, if necessary, publish a new event of its own.
NOTE: In contrast to other integration patterns, where services are orchestrated and business logic is centralized, an event-driven system assumes that each service decides for itself how to react when an event occurs.
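The publish/subscribe flow described above can be sketched as a minimal in-process event bus. This is an illustrative toy, not a real data distribution layer: a production system would use a durable broker, and the topic names, event shapes, and "orders"/"shipping" services here are assumptions for the sake of the example.

```python
from collections import defaultdict
from typing import Callable

class EventBus:
    """Toy in-process pub/sub bus; stands in for a real distribution layer."""

    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, event: dict) -> None:
        # Each subscribing service decides for itself how to react.
        for handler in self._subscribers[topic]:
            handler(event)

bus = EventBus()
shipped: list[int] = []

def on_order_placed(event: dict) -> None:
    # A hypothetical "shipping" service: performs its business task,
    # then publishes a follow-up event of its own.
    shipped.append(event["order_id"])
    bus.publish("order.shipped", {"order_id": event["order_id"]})

bus.subscribe("order.placed", on_order_placed)

# A hypothetical "orders" service publishes an event for a business transaction.
bus.publish("order.placed", {"order_id": 42})
print(shipped)  # [42]
```

Note that the publisher knows nothing about its subscribers; the coupling lives only in the event contract, which is what keeps the services independently deployable.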
Central to this architecture is a scalable, real-time data distribution layer that provides the infrastructure your microservices will use to publish and subscribe to events. But this alone isn’t enough to solve the data consistency problem.
In an event-driven model, multiple microservices may react to the same event at the same time. In traditional systems we'd rely on the two-phase commit (2PC) approach, but that isn't viable in a cloud-native microservices architecture, where coordinating locks across unreliable network connections undermines the atomicity 2PC depends on.
In simple terms, each microservice must be able to understand and act on the events it receives. In less simple terms, a given microservice doesn't store the state of a business entity; it stores the state-changing events from which the current state of that entity can be derived.
The data distribution layer, then, acts as an event store that enables each microservice to obtain the current (or initial) state of any given entity by loading and replaying the relevant events. In addition, the data layer can intelligently filter out-of-date, stale, or otherwise redundant data.
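Deriving an entity's state from its stored events can be sketched as follows. The "account" entity and its event types are illustrative assumptions, not part of any particular platform; the point is that the service replays the history rather than reading a stored balance.

```python
from dataclasses import dataclass
from typing import Iterable, Union

# Hypothetical state-changing events for an "account" entity.
@dataclass
class Deposited:
    amount: int

@dataclass
class Withdrawn:
    amount: int

Event = Union[Deposited, Withdrawn]

def current_state(events: Iterable[Event]) -> int:
    """Replay the event stream to derive the entity's current balance.

    The service never persists the balance itself; the event store holds
    the full history, and state is recomputed (or snapshotted) on load.
    """
    balance = 0
    for event in events:
        if isinstance(event, Deposited):
            balance += event.amount
        elif isinstance(event, Withdrawn):
            balance -= event.amount
    return balance

event_store = [Deposited(100), Withdrawn(30), Deposited(10)]
print(current_state(event_store))  # 80
```

Because state is a pure function of the event history, any service subscribed to the same stream can reconstruct the same entity, which is what makes this pattern a workable substitute for a shared database.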
All a bit complex?
There is no doubt that a microservices architecture introduces complexity at many levels – even the definition of what a microservice is can be complicated. Ultimately any architecture or system design should focus on understanding and managing complexity, rather than simply avoiding it.
Here are some common pitfalls, drawing on the considerations we've covered (and a few more):
- Failing to consider the network – the Internet tends to be slower than in-process communication!
- Failing to implement some form of governance – don’t imagine that every platform, technology or design idea will be welcomed by Operations!
- Failing to plan, and planning to fail (sorry!) – we watched SOA (service-oriented architecture) projects cost people their jobs and their sanity! Any architecture needs upfront design, even if it changes later.
- Failing to manage your data – scale and agility are quickly lost if you introduce coupling at the data layer.
Back in 2003, Martin Fowler wrote about defining application boundaries. The premise back then was that SOA would eliminate the “application” and everything would be an assembly of services. The world of microservices has perhaps reignited this notion.
“I don’t think applications are going away for the same reasons why application boundaries are so hard to draw. Essentially applications are social constructions:
- A body of code that’s seen by developers as a single unit
- A group of functionality that business customers see as a single unit
- An initiative that those with the money see as a single budget”
With all the complexity to manage, technology options to weigh, and architectural implications to consider, system design today can still be constrained by the same organizational boundaries Conway observed decades ago.
The Diffusion® Intelligent Event-Data Platform makes it easy to consume, enrich, and deliver event-data in real time across all network conditions. Push Technology pioneered, and is the sole provider of, the real-time delta data streaming™ technology that powers mission-critical business applications worldwide. Leading brands use Push Technology to bring innovative products to market faster, reducing software development effort with the platform's low-code features. The Diffusion® Intelligent Event-Data Platform is available on-premises, in the cloud, or in a hybrid configuration. Learn how Push Technology can reduce infrastructure costs and increase the speed, efficiency, and reliability of your web, mobile, and IoT applications.