I am using a managed RabbitMQ cluster through AWS Amazon MQ. If the consumers finish their work quickly, everything works fine. However, in some scenarios a consumer takes more than 30 minutes to complete its processing. When that happens, RabbitMQ removes the consumer and makes the same messages visible in the queue again, so another consumer picks them up and starts processing, and this repeats in a loop. As a result the same transaction gets executed multiple times and I keep losing consumers. I am not setting any AcknowledgeMode, so I believe it defaults to AUTO, which has a 30-minute limit. Is there any way to increase the delivery acknowledgement timeout for AUTO mode? Or please let me know if anyone has any other solutions for this.
-
As of now, there does not seem to be a way to change the configuration (rabbitmq.conf) of a managed AWS RabbitMQ instance. I have tried rabbitmqadmin. There is another tool called rabbitmqctl, but I looked at the documentation and there doesn't seem to be an option to modify the configuration there either. Do you have the option of setting up RabbitMQ on an EC2 instance? Then you can modify rabbitmq.conf directly... # 30 minutes in milliseconds consumer_timeout = 1800000 – JCompetence Aug 30 '21 at 10:01
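(For reference, a minimal sketch of what that entry could look like in rabbitmq.conf on a self-managed broker; the file path and the one-hour value are illustrative, not from the comment above:)

```ini
# /etc/rabbitmq/rabbitmq.conf on a self-managed instance
# Delivery acknowledgement timeout in milliseconds (here: 1 hour)
consumer_timeout = 3600000
```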
-
Your other option is to acknowledge messages right away and then process them... but the problem with that is: what happens if a message does not process correctly? It will not be resent by Amazon MQ. – JCompetence Aug 30 '21 at 10:02
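(A rough sketch of that acknowledge-first approach with the plain Java AMQP client; the queue name, broker URI, and process() helper are placeholders, not details from the original post:)

```java
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.DeliverCallback;

public class AckEarlyConsumer {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setUri("amqps://user:pass@your-broker.mq.us-east-1.amazonaws.com:5671"); // placeholder
        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();

        DeliverCallback onDeliver = (consumerTag, delivery) -> {
            // Acknowledge immediately so the 30-minute consumer timeout can never fire...
            channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
            // ...but if the work below fails, the broker will NOT redeliver this message.
            process(new String(delivery.getBody(), "UTF-8"));
        };

        // autoAck = false: we acknowledge manually, before the long-running work starts
        channel.basicConsume("work-queue", false, onDeliver, consumerTag -> { });
    }

    private static void process(String body) {
        // long-running processing goes here (placeholder)
    }
}
```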
-
Thank you! This is very helpful. I'll raise an AWS support ticket and see if they have any option to change this. – Swagat Aug 30 '21 at 12:32
-
Hope you get an answer, and if you do, please update us here :). Good luck – JCompetence Aug 30 '21 at 12:51
2 Answers
Reply from AWS Support:
The consumer timeout is now configurable, but the change can only be made by the service team. The change is permanent, irrespective of version.
So you can update RabbitMQ to the latest version; there is no need to stick with 3.8.11. Provide your broker details and the desired timeout, and they should be able to apply it for you.

This is the response from AWS support.
From my understanding, I see that your workload is currently affected by the consumer_timeout parameter that was introduced in v3.8.15. We have had a number of reach-outs due to this. Unfortunately, the service team has confirmed that while they can manually edit rabbitmq.conf, the change would be overwritten on the next reboot or failover, so this is not a recommended solution. It would also mean that all security patching on the brokers where a manual change is applied would have to be paused. Currently, the service does not support custom user configurations for RabbitMQ from this configuration file, but they have confirmed they are looking to address this in the future; however, they are not able to give an ETA on when this will be available.
From the RabbitMQ GitHub, it seems this was added for quorum queues in v3.8.15 (https://github.com/rabbitmq/rabbitmq-server/releases/tag/v3.8.15), but it appears to apply to all consumers (https://github.com/rabbitmq/rabbitmq-server/pull/2990).
Unfortunately, RabbitMQ itself does not support downgrades (https://www.rabbitmq.com/upgrade.html). Thus the recommended workaround, and the safest action from the service team as of now, is to create a new broker on an older version (3.8.11) and set auto minor version upgrade to false so that it won't be upgraded. Then export the configuration from the existing RabbitMQ instance, import it into the new instance, and use the new instance going forward.
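(For the export/import step, one option is the RabbitMQ management HTTP API's /api/definitions endpoint, assuming it is reachable on the broker's web console URL. The sketch below is illustrative only; the broker hostnames, credentials, and the use of Java's built-in HTTP client are assumptions, not details from the answer:)

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

public class DefinitionsMigration {
    public static void main(String[] args) throws Exception {
        // Placeholder endpoints and credentials -- substitute your own broker details.
        String oldBroker = "https://old-broker-id.mq.us-east-1.amazonaws.com";
        String newBroker = "https://new-broker-id.mq.us-east-1.amazonaws.com";
        String auth = "Basic " + Base64.getEncoder().encodeToString("admin:password".getBytes("UTF-8"));

        HttpClient client = HttpClient.newHttpClient();

        // Export queues, exchanges, bindings, users, policies, etc. from the old broker.
        HttpRequest export = HttpRequest.newBuilder(URI.create(oldBroker + "/api/definitions"))
                .header("Authorization", auth)
                .GET()
                .build();
        String definitions = client.send(export, HttpResponse.BodyHandlers.ofString()).body();

        // Import the same definitions into the newly created 3.8.11 broker.
        HttpRequest importReq = HttpRequest.newBuilder(URI.create(newBroker + "/api/definitions"))
                .header("Authorization", auth)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(definitions))
                .build();
        client.send(importReq, HttpResponse.BodyHandlers.ofString());
    }
}
```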
