I am a complete newbie to RabbitMQ messaging, so apologies if this question is silly or my setup is completely pear-shaped.
My setup is this: I use RabbitMQ to send messages from a number of probes, each of which has a unique name, to a centralised server that processes the data, if there is a need.
I use a direct exchange and routing keys that correspond to probe names.
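My mental model of this routing, as a plain-Python sketch (no pika involved; the probe names and payloads below are made up):

```python
from collections import defaultdict

class DirectExchange:
    """Toy stand-in for a direct exchange: exact routing-key match only."""

    def __init__(self):
        self.bindings = defaultdict(list)  # routing key -> bound queues

    def bind(self, queue, routing_key):
        self.bindings[routing_key].append(queue)

    def publish(self, routing_key, body):
        # A message is copied into every queue bound with exactly this key.
        for queue in self.bindings[routing_key]:
            queue.append(body)

exchange = DirectExchange()
probe_queue = []
exchange.bind(probe_queue, "probe-1")

exchange.publish("probe-1", b"temp=21.5")  # key matches a binding
exchange.publish("probe-2", b"temp=19.0")  # no binding for this key

print(probe_queue)  # [b'temp=21.5']
```

What this toy model does not tell me is what RabbitMQ itself does with a message whose key matches no binding, which is essentially my question below.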
I declare my consumer (server) as follows (this is more or less from the RabbitMQ tutorials):
connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()
channel.exchange_declare(exchange="foo", exchange_type="direct")
result = channel.queue_declare(queue="", exclusive=True)
queue_name = result.method.queue
If at some point I become interested in what a probe is reporting, I issue
channel.queue_bind(exchange="foo", queue=queue_name, routing_key="XXX")
where XXX is the name of the probe.
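For completeness, the consume loop I run on the server side after binding looks roughly like this, wrapped in a function (the callback is just a stub, and the probe name is made up):

```python
def run_consumer(probe_name, host="localhost"):
    """Bind an exclusive queue to one probe's routing key and process its messages."""
    import pika  # third-party client: pip install pika

    connection = pika.BlockingConnection(pika.ConnectionParameters(host))
    channel = connection.channel()
    channel.exchange_declare(exchange="foo", exchange_type="direct")

    # Server-named, exclusive queue: it exists only for this connection.
    result = channel.queue_declare(queue="", exclusive=True)
    queue_name = result.method.queue
    channel.queue_bind(exchange="foo", queue=queue_name, routing_key=probe_name)

    def on_message(ch, method, properties, body):
        # Stub: the real processing of the probe data would happen here.
        print(method.routing_key, body)

    channel.basic_consume(queue=queue_name,
                          on_message_callback=on_message,
                          auto_ack=True)
    channel.start_consuming()  # blocks until interrupted

# run_consumer("probe-1")  # requires a running RabbitMQ broker
```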
My publishers at the probes are declared as follows:
connection = pika.BlockingConnection(pika.ConnectionParameters(host="foo.bar.com"))
channel = connection.channel()
channel.exchange_declare(exchange="foo", exchange_type="direct")
and when I send a message, I use
channel.basic_publish(exchange="foo", routing_key="XXX", body=data)
where XXX is the name of the probe.
This all works fine. But how do I make it so that messages to routing keys that no one is listening to get discarded immediately? As it is, if my consumer stops listening to a routing key, or is not running at all, messages sent by the probes start piling up. When I start my consumer, or have it listen to a routing key it has not been listening to in a while, there may be a backlog of tens of thousands of messages waiting. That is not what I need, and such a backlog is bound to cause resource exhaustion somewhere.
Is there a way to modify this setup so that messages are discarded, instead of queued, if no one is listening for them when they arrive at the exchange? I would assume there is, but neither Google nor the pika documentation has helped.
Thanks in advance.