3

We're building a new web-based industrial application, and one of the questions that has been hammering our heads for the last few days is about the integration between the different "microservices" of this architecture.

I'm saying "microservices" with just a pinch of salt, because we're not totally embracing the concepts that define real microservices. One difference (and I think the biggest one) lies in the fact that we're using the same shared database across the different modules (which I'm calling "microservices"). A sort-of logical view of our system could be drawn as:

                  ╔══════════════╗
                  ║    Client    ║ ══╗
                  ╚══════════════╝   ║ (2)
                                     ║
                                     ▼        
        ╔══════════════╗  (1) ╔══════════════╗
        ║  Serv. Reg.  ║ <==> ║  API Gatew.  ║
        ╚══════════════╝      ╚══════════════╝
            █       █   █████████████     (4)
           █         █              ████
╔══════════════╗  ╔══════════════╗  ╔══════════════╗
║   Module A   ║  ║   Module B   ║  ║   Module C   ║  <===== "Microservices"
╚══════════════╝  ╚══════════════╝  ╚══════════════╝
        ║║ (3)           ║║ (3)            ║║ (3)
        ║║               ║║                ║║
╔══════════════════════════════════════════════════╗
║                Database Server                   ║
╚══════════════════════════════════════════════════╝

Some things that we've already figured out:

  • The Clients (External Systems, Frontend Applications) will access the different Backend Modules using the Discovery/Routing pattern. We're considering a mix of Netflix OSS Eureka and Zuul to provide this. Services (Modules A, B, C) register themselves (4) with the Service Registration Module, and the API Gateway coordinates (1) with the Registry to find Service Instances to fulfill the requests (2).
  • All the different Modules use the same Database (3). This is more of a client's request than an architecture decision.
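For reference, the registration step (4) for a module typically needs little more than the Spring Cloud Netflix starter on the classpath and a pointer to the registry. A minimal sketch of what a module's config might look like (the service name `module-a` and the registry URL are hypothetical, assuming a Eureka server on its default port):

```yaml
# application.yml of "Module A" -- illustrative values only
spring:
  application:
    name: module-a          # the ID under which Eureka/Zuul will know this module
eureka:
  client:
    service-url:
      defaultZone: http://localhost:8761/eureka/   # address of the Service Registry
```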

The point where we (or I, personally) are stuck is how to do the communication between the different modules. I've read a ton of different patterns and anti-patterns for this, and almost every single one recommends API integration via RestTemplate or some specialized client like Feign or Ribbon.

I tend to dislike this approach for a few reasons, mainly the synchronous and stateless nature of HTTP requests. The statelessness of HTTP is my biggest issue, as the service layers of different modules can have some strong bindings. For example, an action that is fired on Module A can have ramifications on Modules B and C, and everything needs to be coordinated from a "transaction" standpoint. I really don't think HTTP would be the best way to control this!
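To make the coordination problem concrete, here is a plain-Java sketch of what happens when Module A fans out to B and C without an enclosing transaction. The module names and operations are invented, and the "remote" HTTP calls are simulated with local methods; the point is only the failure shape:

```java
import java.util.ArrayList;
import java.util.List;

// Simulates the fan-out from Module A: each remote call commits
// independently, so a failure in C leaves B's change already applied.
public class FanOutWithoutTransaction {
    static List<String> committed = new ArrayList<>();

    static void callModuleB() { committed.add("B: stock reserved"); }

    static void callModuleC() {
        throw new IllegalStateException("C: invoice service down");
    }

    // Module A's "action" that has ramifications on B and C.
    public static List<String> fireAction() {
        try {
            callModuleB();   // commits in B's own database session
            callModuleC();   // fails -- but B's commit cannot be rolled back
        } catch (IllegalStateException e) {
            // Over HTTP there is no enclosing transaction to roll back;
            // A is now responsible for undoing B's work explicitly.
        }
        return committed;
    }
}
```

After `fireAction()`, B's change survives even though the overall action failed, which is exactly the "transaction standpoint" problem described above.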

The Java EE part inside of me screams to use some kind of Service Integration like EJB or RMI, or anything that does not use HTTP in the end. For me, it would be much more "natural" to wire a certain Service from Module B inside Module A and be sure that they participate together in a transaction.
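Since all modules hit the same database, that "wire Service B into Module A" style does not actually need EJB machinery: one local transaction covering both services gives the same guarantee (in practice, Spring's @Transactional over a shared DataSource). A toy unit-of-work sketch, with all names invented:

```java
import java.util.ArrayList;
import java.util.List;

// Toy unit-of-work: both services enlist their writes, and everything
// is applied together or discarded together -- the single "natural"
// transaction the question describes.
public class SharedTransactionSketch {
    static class UnitOfWork {
        private final List<String> pending = new ArrayList<>();
        void enlist(String write) { pending.add(write); }
        List<String> commit() { return new ArrayList<>(pending); }
        void rollback() { pending.clear(); }
    }

    static class ServiceA {
        void doWork(UnitOfWork tx) { tx.enlist("A: order created"); }
    }

    static class ServiceB { // "wired into" Module A
        void doWork(UnitOfWork tx) { tx.enlist("B: stock reserved"); }
    }

    public static List<String> runTogether() {
        UnitOfWork tx = new UnitOfWork();
        try {
            new ServiceA().doWork(tx);
            new ServiceB().doWork(tx);
            return tx.commit();          // both writes, or neither
        } catch (RuntimeException e) {
            tx.rollback();
            return List.of();
        }
    }
}
```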

Another thing that needs to be emphasized is that paradigms like eventual consistency on the database are not enough for our client, as they're dealing with some serious kinds of data. So, the "I promise to do my best with the data" approach does not fit very well here.

Time for the question:

Is this "Service Integration" really a thing when dealing with "Microservices"? Or does "Resource Integration" win over it?

It seems that Spring, for example, provides Spring Integration to enable messaging between services, much as a technology like EJB would. Is this the best way to integrate those services? Am I missing something?
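(For anyone skimming: Spring Integration's core abstraction is a message channel between components. The shape of the idea can be sketched with a plain BlockingQueue; this is not the Spring Integration API, just an illustration of the decoupling a channel buys you, and the module names are made up.)

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Module A drops a message on a channel; Module B consumes it on its
// own thread. The sender does not block on B's processing -- the
// decoupling that message channels provide.
public class ChannelSketch {
    static BlockingQueue<String> channel = new ArrayBlockingQueue<>(16);

    public static void main(String[] args) throws InterruptedException {
        Thread moduleB = new Thread(() -> {
            try {
                String msg = channel.take();        // B waits for work
                System.out.println("B handled: " + msg);
            } catch (InterruptedException ignored) { }
        });
        moduleB.start();

        channel.put("order-created:42");            // A publishes and moves on
        moduleB.join(5000);
    }
}
```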

PS: You may call my "Microservices" "Microliths", as we usually name them around here. :)

Gustavo Ramos
  • 1,324
  • 1
  • 12
  • 23
  • 4
    Are you using the same database server, but different schemas for each microservice? Or are you really using the same schema and same tables between several microservices? For me, it sounds like a microservice architecture is not really suitable for your requirements, given the strong bindings between those modules. – dunni Jan 17 '19 at 07:14
  • @dunni yeah, I also believe that the microservice architecture is not the best way to describe what we are trying to achieve. All the different services (modules) use the same database and same schema on the database. The main goal with this architecture is to have smaller pieces of a whole - as to say a microlith - so they can be scaled up easily. – Gustavo Ramos Jan 17 '19 at 07:19
  • May help you : https://stackoverflow.com/questions/9795677/how-to-design-global-distributed-transactionnone-database-can-jta-use-for-non – Mehraj Malik Jan 17 '19 at 07:41
  • @MehrajMalik not exactly what I'm looking for. That link relates to the Java Transaction API and Java EE patterns for implementing global transactions. My question is more about transactions on distributed systems _without_ using Java EE, towards more freedom and flexibility. – Gustavo Ramos Jan 17 '19 at 07:44

3 Answers

2

AFAIK, usually, a microservice approach assumes that data is not shared between the services at the level of the same database/schema.

This means that a microservice is implemented as an "expert" on some notion of the domain of your business logic that is wide/complicated enough to "deserve" a microservice.

Otherwise, many of the advantages of the microservice architecture disappear:

  1. Independence. Well, we can deploy new versions of a microservice at our own pace, not even share a technology stack, and so on and so forth. Now, if I change the schema (even slightly), say by adding, or in more extreme cases changing or removing, a column, how will the other microservices know that this has happened? Can I actually deploy each service on its own, or do I now have to deploy all of them at once? Sadly, the second option is the only viable way to go here. So, no real independence.

  2. Scalability. It's true that each microservice can scale independently, and it sounds great. But now, since they all share the same storage, all operations will be performed against that storage, so at scale it will quickly become a real bottleneck. Thus, no real scalability either.

You see where it goes...

Now, the question of transactions, or in a more general sense data integrity, is indeed one of the biggest challenges to be addressed when moving to a microservices approach.

Usually (again, from my experience) people adjust their use cases so that eventual consistency is also good enough, and then there is no issue. One way or another, things like distributed transactions are better avoided, because they extremely complicate the code and are a real nightmare to maintain.
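For completeness, the usual alternative to a distributed transaction is a saga: each step has a compensating action, and on failure the already-completed steps are undone in reverse order. A minimal sketch with invented step names:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Saga sketch: run steps in order; if one fails, run the compensations
// of the completed steps in reverse. No 2PC, no XA -- but every
// compensation must be written (and tested) by hand.
public class SagaSketch {
    interface Step {
        void execute(List<String> log);
        void compensate(List<String> log);
    }

    static Step step(String name, boolean fails) {
        return new Step() {
            public void execute(List<String> log) {
                if (fails) throw new IllegalStateException(name + " failed");
                log.add(name + ": done");
            }
            public void compensate(List<String> log) {
                log.add(name + ": undone");
            }
        };
    }

    public static List<String> run(List<Step> steps) {
        List<String> log = new ArrayList<>();
        Deque<Step> completed = new ArrayDeque<>();
        try {
            for (Step s : steps) {
                s.execute(log);
                completed.push(s);
            }
        } catch (RuntimeException e) {
            while (!completed.isEmpty()) completed.pop().compensate(log);
        }
        return log;
    }
}
```

Running `run(List.of(step("reserve-stock", false), step("charge-card", true)))` leaves a log where reserve-stock is done and then undone, illustrating the compensation flow.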

So, bottom line: you should first think about whether the microservice approach is good for your application at all, and if so, how to really decompose the data in a way that all the main use cases will be met.

Mark Bramnik
  • 39,963
  • 4
  • 57
  • 97
  • The problem with this approach is that in tightly-coupled domain industries (like mine), if we wanted to really isolate the data units into different services, the resulting services would not be micro in any way! For example: one of the applications that we developed in the past had ~350 entities in the database. Some central entities - maybe 50 of them - are virtually accessed by almost all functionalities of the system. When we think about making the service bigger to really isolate the domain, I think the very essence of "micro" is lost. Ideas? – Gustavo Ramos Jan 17 '19 at 07:40
  • Regarding shared data, Chris Richardson introduces the [shared database](https://microservices.io/patterns/data/shared-database.html) pattern on the microservices architecture. Is this really valid? He seems to be quite a reference on this question, but anyway it's better to ask! – Gustavo Ramos Jan 17 '19 at 07:42
  • Nice link, but read his own comment when people ask about connection limit: This pattern is more of an anti-pattern. It's best to use the Database per Service pattern. Each service has a private schema (perhaps on the same database server) and transactions do not span schemas. That way you can simply add database instances. – Mark Bramnik Jan 17 '19 at 07:52
  • Yeah, you are right. Can you see one of my last responses on the OP? My problem with Database per Service would be that our domain is very much coupled. If I go this way, the resulting service would not be micro at all! – Gustavo Ramos Jan 17 '19 at 07:55
  • Regarding your first comment: indeed, microservices are not for everyone. However, it happens very often that, because of a monolith, it's actually easy to create these interdependencies and this "accessed by all functionalities of the system" situation - which is a messy approach regardless of the architecture. So after a couple of years the system becomes intertwined in a way that makes microservice decomposition very hard. So it kind of works in the opposite direction. – Mark Bramnik Jan 17 '19 at 07:56
2

The microservices approach comes into the picture when you do domain-driven design. First of all, you should have deep knowledge of the domain addressed by your application, and you should have the capability to identify the bounded contexts of your application's modules. If you have identified that kind of granular module, you can easily convert them to microservices.

Each and every bounded context should be highly independent; that means a single transaction has to be handled within one microservice. Since you are using a common database, I am assuming it is an RDBMS strictly following the ACID properties. If you have to attain the ACID properties across your transactions with the DB, each and every microservice that you are going to create has to handle transaction management independently. Put another way: if you can conceptualize transactions as an ordered sequence of events, where each event maintains its own state, you can evolve your microservices as engines that take care of these events.

If you end up with a design where the microservices have to depend on each other a lot, and they are not capable of handling the states/events/transactions by themselves in the majority of use cases, you have to rethink your approach. What you are trying to achieve is called the shared database pattern. Some people consider it an anti-pattern.

Steephen
  • 14,645
  • 7
  • 40
  • 47
0

You could look into Domain-Driven Design to think about the contexts of your services. An aggregate should be your boundary for a transaction.
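The "aggregate as transaction boundary" idea can be sketched as a single object that enforces its own invariants before any state change is applied, so every change to it is consistent on its own (the Order example and its rules are invented for illustration):

```java
import java.util.ArrayList;
import java.util.List;

// Aggregate sketch: the Order enforces its own invariants (no items
// after confirmation, no confirming an empty order), which is what
// makes it a safe boundary for a transaction.
public class OrderAggregate {
    private final List<String> items = new ArrayList<>();
    private boolean confirmed = false;

    public void addItem(String sku) {
        if (confirmed) throw new IllegalStateException("order already confirmed");
        items.add(sku);
    }

    public void confirm() {
        if (items.isEmpty()) throw new IllegalStateException("empty order");
        confirmed = true;
    }

    public int itemCount() { return items.size(); }
    public boolean isConfirmed() { return confirmed; }
}
```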

To think about the communication between the services, you can try to identify events and use these to coordinate your state changes across several services. If you really need a lot of data from multiple external services to do a transaction, you could keep a copy of the external data in the service that needs it. These are usually called projections, and they can be kept up to date via the events published by the external services.
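A projection can be sketched as a small read model that one service keeps updated purely from events published by the owning service; it never reads the other service's tables directly. The service roles and event names below are invented:

```java
import java.util.HashMap;
import java.util.Map;

// Projection sketch: a "billing" service keeps a local copy of customer
// names owned by a "customer" service, updated only from its events.
public class CustomerProjection {
    private final Map<String, String> namesById = new HashMap<>();

    // Called for every event the owning service publishes.
    public void apply(String eventType, String customerId, String payload) {
        switch (eventType) {
            case "CustomerRegistered", "CustomerRenamed" ->
                namesById.put(customerId, payload);
            case "CustomerDeleted" -> namesById.remove(customerId);
            default -> { /* ignore events this projection doesn't care about */ }
        }
    }

    public String nameOf(String customerId) { return namesById.get(customerId); }
}
```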

Generally it is seen as bad practice to share a table between services; only one service should be the owner.

Jeff
  • 1,871
  • 1
  • 17
  • 28