
I have a question concerning scalability within a microservice architecture:

Independent of the inter-service communication style (REST over HTTP or message-based), if a service scales, meaning several replicas of the service are launched, how is shared main memory realized? To be more precise, how can instance1 access the memory of instance2?

I am asking because a shared, non-in-memory database between all instances of a service can be way too slow for reads and writes.

Could an expert in designing scalable system architectures explain what exactly the difference is between using the (open-source) Redis solution and the (open-source) Hazelcast solution for this problem?

And as another possible solution: designing scalable systems with RabbitMQ:

Is it feasible to use message queues as a shared-memory solution by sending large/medium-sized objects within messages to a worker queue?

Thanks for your help.

Jwf
  • Message queues generally don't handle large objects. See [this answer](https://stackoverflow.com/a/47544515/1472222) for details. The rest of your question is at best too broad. Can you narrow it down, or ask a more specific question? – theMayer Nov 16 '19 at 01:41

1 Answer


> several instances of the service are going to be launched, how is a shared main memory realized? To be more precise, how can instance1 access the memory of instance2?

You don't. Stateless workloads scale by adding more replicas. It is important that those replicas are in fact stateless and loosely coupled - shared nothing. All replicas can still communicate with an in-memory service or database, but that stateful service is its own independent service (in a microservice architecture).
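
As a rough sketch of that shared-nothing setup (the Redis host, the Jedis client, and the key names here are my own assumptions, not anything from the question), each stateless replica keeps nothing on its own heap and reads/writes shared values through the external in-memory service:

```java
import redis.clients.jedis.JedisPooled;

// Sketch only: every stateless replica talks to the same external Redis
// instance instead of keeping state in its own process memory.
// Host, port, and key names are illustrative assumptions.
public class VisitCounter {

    private final JedisPooled redis = new JedisPooled("redis.internal", 6379);

    // Any replica can call this; the counter lives in Redis, not in the JVM
    // heap, so all instances observe the same value.
    public long registerVisit(String userId) {
        return redis.incr("visits:" + userId);
    }

    public String cachedProfile(String userId) {
        return redis.get("profile:" + userId);
    }
}
```

Because the state lives in its own service, you can add or remove replicas of VisitCounter freely without any instance needing to reach into another instance's memory.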

> what exactly is the difference in using the (open source) Redis solution or using the (open source) Hazelcast solution to this problem?

Both are valid solutions. Which one is best for you depends on which libraries, protocols, or integration patterns fit your stack best.
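
To make that concrete, here is a hedged sketch (class names, hosts, and the choice of the Jedis client and Hazelcast 4+/5 are my assumptions): Redis runs as a separate server that every replica talks to over the network using Redis commands, while Hazelcast can be embedded in each service JVM and exposes the shared data as distributed Java collections such as IMap.

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.map.IMap;
import redis.clients.jedis.JedisPooled;

public class SharedCacheExamples {

    // Redis: a separate server process, accessed over the network with
    // Redis commands (here via the Jedis client). Host/port are assumed.
    static void redisStyle() {
        JedisPooled redis = new JedisPooled("redis.internal", 6379);
        redis.set("session:42", "{\"user\":\"jwf\"}");
        System.out.println(redis.get("session:42"));
    }

    // Hazelcast: can run embedded inside each service JVM; the members
    // discover each other and expose shared state as familiar Java
    // collections (IMap behaves like a distributed ConcurrentMap).
    static void hazelcastStyle() {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        IMap<String, String> sessions = hz.getMap("sessions");
        sessions.put("session:42", "{\"user\":\"jwf\"}");
        System.out.println(sessions.get("session:42"));
    }
}
```

Hazelcast also offers a client/server mode like Redis, so the real decision is usually about operational preferences and the programming model rather than raw capability.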

> Is it feasible to use message queues as a shared memory solution, by sending large/medium size objects within messages to a worker queue?

Yes, that is perfectly fine. Alternatively, you can use a distributed pub/sub messaging platform like Apache Kafka or Apache Pulsar.
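
A minimal publisher sketch with the RabbitMQ Java client (broker host, queue name, and payload are assumptions on my part). As the comment under the question points out, brokers don't handle genuinely large objects well, so the usual pattern is to keep the message payload small-to-medium, or to send a reference (e.g. an object-store key) instead of embedding a large blob:

```java
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.MessageProperties;
import java.nio.charset.StandardCharsets;

public class WorkQueuePublisher {

    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("rabbitmq.internal"); // assumed broker host

        try (Connection connection = factory.newConnection();
             Channel channel = connection.createChannel()) {

            // Durable work queue shared by all worker replicas.
            channel.queueDeclare("work-queue", true, false, false, null);

            // The "object" is just a serialized payload (JSON here).
            byte[] payload = "{\"orderId\":123,\"items\":[1,2,3]}"
                    .getBytes(StandardCharsets.UTF_8);

            channel.basicPublish("", "work-queue",
                    MessageProperties.PERSISTENT_TEXT_PLAIN, payload);
        }
    }
}
```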

Grzegorz Piwowarek
Jonas
  • Sorry, it seems that I mixed up the terminology here: by instance I meant replica. Not sure what else could be meant by instance; I didn't mean a physical instance. – Jwf Nov 17 '19 at 15:02
  • Thanks for the answer. I'll probably give a direct use-case example to provide a better basis for discussion. One main question is: can database access be a huge bottleneck, can one overcome this bottleneck with an in-memory solution, and if yes, which one is appropriate for the actual problem? – Jwf Nov 17 '19 at 15:05
  • @Jwf Instances and replicas are the same thing in my answer; sorry for not using the same word as you did. – Jonas Nov 17 '19 at 16:23
  • @Jwf databases can be a bottleneck, but that is a different question. You can scale databases or add caches... it all depends on your situation. – Jonas Nov 17 '19 at 16:24
  • thanks for your input. I posted a more specific use case to this topic here: https://stackoverflow.com/questions/58902946/microservices-architecture-for-highly-frequent-data-access-in-memory-solutions – Jwf Nov 17 '19 at 17:07
  • I'd like to hear your input on this, @Jonas. – Jwf Nov 17 '19 at 17:07