I have a question concerning scalability within a microservice architecture:
Independent of the inter-service communication style (REST over HTTP or message based): if a service scales, meaning several replicas of the service are launched, how is shared main memory realized? To be more precise, how can instance1 access the memory of instance2?
I am asking because a shared, non-in-memory (disk-based) database used by all instances of a service can be far too slow for reads and writes.
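To make the access pattern I have in mind concrete, here is a minimal sketch using Redis through the Jedis client (the host name and key are just placeholders): every replica talks to the same external in-memory store, so a write by one replica is immediately readable by another.

```java
import redis.clients.jedis.Jedis;

public class SharedStateViaRedis {
    public static void main(String[] args) {
        // Every replica connects to the same external Redis server
        // ("redis-host" and the key are placeholders).
        try (Jedis jedis = new Jedis("redis-host", 6379)) {
            // Replica 1 writes a value into the shared store...
            jedis.set("session:42", "{\"user\":\"alice\"}");
            // ...and replica 2, running the identical code elsewhere, reads it back.
            String value = jedis.get("session:42");
            System.out.println(value);
        }
    }
}
```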
Could an expert in designing scalable system architectures explain what exactly the difference is between using the (open source) Redis solution and the (open source) Hazelcast solution for this problem?
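As far as I understand it, Hazelcast can also run embedded inside each service instance, with the members forming a cluster and partitioning the data among themselves, instead of every instance calling out to a separate server as with Redis. A minimal sketch of what I mean (Hazelcast 4/5 API; the map name is just a placeholder):

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.map.IMap;

public class SharedStateViaHazelcast {
    public static void main(String[] args) {
        // Each service replica starts an embedded Hazelcast member; the members
        // discover each other and partition the map's entries across the cluster.
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();

        // The distributed map looks like a local map but is backed by the whole
        // cluster, so a put() on one replica is visible to a get() on any other.
        IMap<String, String> sessions = hz.getMap("sessions");
        sessions.put("session:42", "{\"user\":\"alice\"}");
        System.out.println(sessions.get("session:42"));

        hz.shutdown();
    }
}
```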
And as another possible solution: designing scalable systems with RabbitMQ.
Is it feasible to use message queues as a shared memory solution, by sending medium-to-large objects inside messages to a worker queue?
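What I picture is something like the following sketch with the RabbitMQ Java client (the broker host and queue name are placeholders): one instance serializes an object into the message body and publishes it to a worker queue, and another instance consumes it. My doubt is that each message is delivered to only one consumer, so this seems to hand data over rather than share it.

```java
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.DeliverCallback;

import java.nio.charset.StandardCharsets;

public class WorkerQueueSketch {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("rabbitmq-host"); // placeholder broker host

        try (Connection connection = factory.newConnection();
             Channel channel = connection.createChannel()) {
            // A durable worker queue shared by all replicas of the consuming service.
            channel.queueDeclare("work", true, false, false, null);

            // Producer side: one instance serializes an object into the message body.
            byte[] payload = "{\"orderId\":42}".getBytes(StandardCharsets.UTF_8);
            channel.basicPublish("", "work", null, payload);

            // Consumer side: another instance receives the payload. Each message
            // goes to exactly one consumer, so the data is handed over, not shared.
            DeliverCallback onDeliver = (consumerTag, delivery) ->
                    System.out.println(new String(delivery.getBody(), StandardCharsets.UTF_8));
            channel.basicConsume("work", true, onDeliver, consumerTag -> { });

            Thread.sleep(1000); // keep the channel open briefly so the consumer can run
        }
    }
}
```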
Thanks for your help.