
I am trying to synchronize three of my microservices.

To do so I've implemented RabbitMQ. Things currently seem to work, but I'm not sure whether I'm following best practice, and I couldn't find a reference to look it up. Maybe someone could help me with that?

A brief summary of what I'm trying to do: I have one service that should update the two others, and each of those services should receive every message sent. I have two types of messages (save and delete resource). In case of a fault, the queue should recover and resend the messages.

What I am currently doing: I've set up an exchange, and each of my consumers connects to two different queues, one for each type of message (save/delete). I've used a direct exchange so I can filter the messages later on, even though I don't currently need to filter them.

Each of the queues is named, the exchange and the messages are durable, and I'm acking the messages after I consume them.
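A minimal sketch of that setup, assuming Python with the pika client; the exchange, queue, and routing-key names here are made up for illustration:

```python
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()

# Durable direct exchange, so routing keys can be used for filtering later on.
channel.exchange_declare(exchange="resources", exchange_type="direct", durable=True)

# One durable queue per consumer per event type (save/delete), bound by routing key.
for service in ("service-a", "service-b"):
    for event in ("save", "delete"):
        queue = f"{service}.{event}"
        channel.queue_declare(queue=queue, durable=True)
        channel.queue_bind(queue=queue, exchange="resources", routing_key=event)

# Publish a persistent message so it survives a broker restart
# (together with the durable queue declarations above).
channel.basic_publish(
    exchange="resources",
    routing_key="save",
    body=b'{"id": 42}',
    properties=pika.BasicProperties(delivery_mode=2),  # 2 = persistent
)
connection.close()
```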

The question: Should I set up a different queue for each type of event, or should I send the messages to the same queue and filter them? Is the use of RabbitMQ described above the correct solution for the problem? What is the best practice?

straiker2

1 Answer


Your setup is correct.

One common rule when designing queues in RabbitMQ is one queue per type of consumer (where "type" means distinct handling logic). Since you have two types of consumers, with different logic for the two types of events (save/delete), one queue for each is exactly right.

If you instead want a single type of consumer that can handle both save and delete events, then using one queue is also fine.

But two types of consumers sharing one queue will not work: when multiple consumers subscribe to one queue, messages are dispatched to them in a round-robin fashion, so each of your consumers would only receive roughly half of the events.
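A minimal consumer sketch of the "one queue per handling logic" rule, again assuming pika; the queue names match the hypothetical ones from the question's sketch:

```python
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()

def on_save(ch, method, properties, body):
    # save-specific handling logic goes here
    ch.basic_ack(delivery_tag=method.delivery_tag)

def on_delete(ch, method, properties, body):
    # delete-specific handling logic goes here
    ch.basic_ack(delivery_tag=method.delivery_tag)

# Each event type gets its own queue and its own callback. Two different
# consumers sharing one queue would instead split the messages round-robin.
channel.basic_consume(queue="service-a.save", on_message_callback=on_save)
channel.basic_consume(queue="service-a.delete", on_message_callback=on_delete)
channel.start_consuming()
```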

Teddy Ma
  • Thank you for your answer! I've encountered a new problem: if I fail to consume a message and don't ack it, the process doesn't try to consume the message again. It never goes back into a ready state on the queue, but it is still in the queue. Any suggestions on how to solve it? – straiker2 Jul 13 '16 at 12:46
  • You should try/catch your consuming logic. If an exception occurs and you are sure the next try might succeed, you can send a nack to return the message to the queue; check this link: http://stackoverflow.com/questions/28794123/ack-or-nack-in-rabbitmq. If you are sure that a retry will still fail (based on the caught exception), you should still send an ack, and also log the exception for further analysis (see the sketch after these comments). – Teddy Ma Jul 14 '16 at 01:59
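
A sketch of that ack/nack pattern, assuming pika; the handler, logger, and exception type are placeholders, not part of any library API:

```python
import logging
import pika

log = logging.getLogger(__name__)

class TransientError(Exception):
    """Placeholder for errors where a retry might succeed."""

def handle(body):
    """Placeholder for the actual consuming logic."""

def on_message(ch, method, properties, body):
    try:
        handle(body)
        ch.basic_ack(delivery_tag=method.delivery_tag)
    except TransientError:
        # A retry might succeed: nack with requeue to return the message to the queue.
        ch.basic_nack(delivery_tag=method.delivery_tag, requeue=True)
    except Exception:
        # A retry would fail the same way: ack so the message doesn't loop forever,
        # and log the exception for further analysis.
        log.exception("failed to process message")
        ch.basic_ack(delivery_tag=method.delivery_tag)
```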