Example use case: There is one microservice that holds terabytes of data in a few tables, all belonging to the same business domain.
- The import of data happens via AMQP messages in an event-driven manner (in CQRS terms: commands). Processing a message with imported data may take several minutes. Losing such a message/command is not allowed.
- The imported data needs to be made accessible to other microservices of other bounded contexts via AMQP messaging, such that a request/response message cycle takes at most 1 second (in CQRS terms: queries). Losing a query message is allowed.
Initial considerations: The scalability, elasticity, performance, fault-tolerance, and possibly other -ility requirements differ between commands and queries. It therefore seems tempting to isolate query processing from command processing in separate deployables, in order to address the -ility requirements of each in the best possible way, by
- Avoiding cases where query processing is blocked by long-running command processing
- Avoiding out-of-memory errors when commands and queries are processed at the same time
- In some tech stacks, such as Spring AMQP, the trade-off between message delivery guarantees (at lower performance) and maximal performance (with no delivery guarantees) is configured centrally for the whole application, e.g. message acknowledgments, concurrency, etc. This might negatively impact query message processing performance.
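For what it's worth, Spring AMQP does support defining multiple listener container factories and selecting one per `@RabbitListener`, so acknowledgment mode and concurrency can differ per queue even within a single deployable. A minimal sketch of that idea (the bean names, queue names, and concurrency values below are hypothetical, not taken from the original question):

```java
import org.springframework.amqp.core.AcknowledgeMode;
import org.springframework.amqp.rabbit.annotation.RabbitListener;
import org.springframework.amqp.rabbit.config.SimpleRabbitListenerContainerFactory;
import org.springframework.amqp.rabbit.connection.ConnectionFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class AmqpListenerConfig {

    // Commands: manual acks, so a message is only removed from the queue
    // after the (possibly minutes-long) import has completed successfully.
    @Bean
    public SimpleRabbitListenerContainerFactory commandFactory(ConnectionFactory cf) {
        SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
        factory.setConnectionFactory(cf);
        factory.setAcknowledgeMode(AcknowledgeMode.MANUAL);
        factory.setConcurrentConsumers(1); // keep long-running imports predictable
        return factory;
    }

    // Queries: no acks and higher concurrency, trading delivery guarantees
    // for throughput -- losing a query message is acceptable here.
    @Bean
    public SimpleRabbitListenerContainerFactory queryFactory(ConnectionFactory cf) {
        SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
        factory.setConnectionFactory(cf);
        factory.setAcknowledgeMode(AcknowledgeMode.NONE);
        factory.setConcurrentConsumers(8);
        return factory;
    }
}

// Each listener picks its factory explicitly (queue names are hypothetical):
class Handlers {

    @RabbitListener(queues = "import.commands", containerFactory = "commandFactory")
    public void onCommand(String payload) {
        // long-running import of the payload
    }

    @RabbitListener(queues = "data.queries", containerFactory = "queryFactory")
    public void onQuery(String payload) {
        // fast read and reply
    }
}
```

This mitigates the configuration concern within one process, but it does not by itself address the resource-isolation concerns (blocking, memory) from the bullets above.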
But then we might get into a discussion of the following questions:
- Do we consider the 2 deployables (1 for slow command processing, 1 for real-time query processing) to be 2 microservices?
- Is sharing a database schema between those 2 microservices/deployables an anti-pattern that should be avoided? Or do the benefits of decoupling the -ilities of command and query processing outweigh that concern?
What are your thoughts and arguments on that?