
I have a 3-broker setup of ActiveMQ v5.14.1. The setup uses composite destinations that pull a copy of each message from another queue. Here is the configuration of one of the brokers:

    <broker xmlns="http://activemq.apache.org/schema/core"
            brokerName="brokerC"
            dataDirectory="${activemq.data}"
            schedulePeriodForDestinationPurge="10000"
            schedulerSupport="true">

    <destinationPolicy>
        <policyMap>
          <policyEntries>
            <policyEntry queue=">" gcInactiveDestinations="true" inactiveTimoutBeforeGC="30000" >
               <deadLetterStrategy>
                  <sharedDeadLetterStrategy processExpired="false" />
               </deadLetterStrategy>

              <networkBridgeFilterFactory>
                <conditionalNetworkBridgeFilterFactory replayWhenNoConsumers="true"/>
              </networkBridgeFilterFactory>
            </policyEntry>
            <policyEntry topic=">" >
              <pendingMessageLimitStrategy>
                <constantPendingMessageLimitStrategy limit="1000"/>
              </pendingMessageLimitStrategy>
            </policyEntry>
          </policyEntries>
        </policyMap>
    </destinationPolicy>

    <!-- Added entry for network of brokers -->
    <networkConnectors>
      <networkConnector name="linkFromCToA"
                        uri="static:(tcp://xx.xxx.xx.xxx:61616)"
                        useVirtualDestSubs="true"/>

      <networkConnector name="linkFromCToB"
                        uri="static:(tcp://xx.xxx.xx.xxx:61616)"
                        useVirtualDestSubs="true"/>
    </networkConnectors>
    <managementContext>
        <managementContext createConnector="false"/>
    </managementContext>

    <persistenceAdapter>
        <kahaDB directory="${activemq.data}/kahadb"/>
    </persistenceAdapter>


      <systemUsage>
        <systemUsage>
            <memoryUsage>
                <memoryUsage percentOfJvmHeap="70" />
            </memoryUsage>
            <storeUsage>
                <storeUsage limit="20 gb"/>
            </storeUsage>
            <tempUsage>
                <tempUsage limit="50 gb"/>
            </tempUsage>
        </systemUsage>
    </systemUsage>

    <transportConnectors>
        <!-- DOS protection, limit concurrent connections to 1000 and frame size to 100MB -->
        <transportConnector name="openwire" uri="tcp://0.0.0.0:61616?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
        <transportConnector name="amqp" uri="amqp://0.0.0.0:5672?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
        <transportConnector name="stomp" uri="stomp://0.0.0.0:61613?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
        <transportConnector name="mqtt" uri="mqtt://0.0.0.0:1883?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
        <transportConnector name="ws" uri="ws://0.0.0.0:61614?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
    </transportConnectors>

    <!-- destroy the spring context on shutdown to stop jetty -->
    <shutdownHooks>
        <bean xmlns="http://www.springframework.org/schema/beans" class="org.apache.activemq.hooks.SpringContextHook" />
    </shutdownHooks>

    <destinationInterceptors>
      <virtualDestinationInterceptor>
        <virtualDestinations>
          <compositeQueue name="Q.1" forwardOnly="false">
            <forwardTo>
              <queue physicalName="Q.2" />
            </forwardTo>
          </compositeQueue>
        </virtualDestinations>
      </virtualDestinationInterceptor>
    </destinationInterceptors>


</broker>

The configuration is essentially the same on all 3 brokers (except, of course, for the broker URLs). After some time, on the DLQ of all 3 brokers I see the following exception:

    java.lang.Throwable: duplicate from store for queue://Q.2

This exception appears in the `dlqDeliveryFailureCause` header of the messages sitting in the DLQ. In a single-broker setup the issue never occurs; it only shows up when I have a network of 2 or more brokers.


1 Answer


For anyone getting stuck on this issue, have a look at the following ActiveMQ User discussion thread.

Two suggestions came out of it: 1. disable the duplicate-detection audit at the queue level, and 2. set messageTTL=2 on the network connectors (because I have 3 brokers).
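A minimal sketch of what those two changes could look like in the broker XML, applied to the configuration shown in the question (`enableAudit` on `policyEntry` and `messageTTL` on `networkConnector` are standard ActiveMQ attributes; the URL is a placeholder and values are illustrative):

    <!-- 1. disable the duplicate-detection audit for all queues -->
    <policyEntry queue=">" enableAudit="false"
                 gcInactiveDestinations="true" inactiveTimoutBeforeGC="30000">
      <!-- existing deadLetterStrategy / networkBridgeFilterFactory children unchanged -->
    </policyEntry>

    <!-- 2. cap broker-to-broker hops: with 3 brokers a message needs at most 2 hops -->
    <networkConnector name="linkFromCToA"
                      uri="static:(tcp://xx.xxx.xx.xxx:61616)"
                      messageTTL="2"
                      useVirtualDestSubs="true"/>

Note that `messageTTL` limits how many network hops a message may make, so it should match the longest path a message legitimately needs to travel in your topology.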

  • Two years later, did you find an explanation or solution to this issue? The linked discussion is still inconclusive. I have the identical issue with 5.14.1 and composite queue at broker side that throws the same error and drop messages into DLQ. – DoNuT Oct 02 '18 at 14:18
  • @DoNuT I don't remember things 100% now, but roughly the cause was the following: in the network-of-brokers setup, we identified that if Broker C has the consumer and a message travels Broker A -> Broker B -> Broker C, then Broker B would essentially receive 2 copies of the same message - 1 forwarded from Q.1 and the other from Q.2. We ultimately solved this by migrating to a Master-Slave configuration (mostly because that was a better fit for our use case). If you identify which messages are hopping between brokers (that should not be hopping), that should help you solve this. – adewan Oct 02 '18 at 18:31
  • Thanks for coming back so quickly. Sadly, my setup is a single-broker AMQ; the only unusual part is forwarding (queue->queue and topic->queue). In my case, before the duplicate is thrown into the DLQ, I see an `EOFException` on the producer's connection, followed by the _duplicate from store_ error seconds later. Producers use Openwire and reconnect; I guess this is beyond the scope of this ticket, and maybe I'll get a better picture if I look at the logs from the client/producer side. – DoNuT Oct 03 '18 at 09:17
  • @DoNuT maybe this will help - https://stackoverflow.com/a/39920227/2112865. Check whether you have connections that are not closed cleanly. If that is the case, you may also see duplicate copies generated. – adewan Oct 03 '18 at 19:10
  • It happens in conjunction with VPN site-to-site connection drops, but since the drop is unintended I cannot guarantee 100% availability; maybe it has something to do with the reconnect (it always occurs roughly 20 seconds after the client dropped out). – DoNuT Oct 04 '18 at 16:06
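For reference, the Master-Slave migration mentioned in the comments above can, in its simplest shared-storage form, look like the following sketch: every broker points its kahaDB at the same shared directory (the path here is illustrative), and the first broker to grab the file lock becomes master while the others wait as slaves:

    <!-- identical on all brokers; /shared/nfs/kahadb is an illustrative path
         on a shared filesystem visible to every broker -->
    <persistenceAdapter>
        <kahaDB directory="/shared/nfs/kahadb"/>
    </persistenceAdapter>

Since only one broker is active at a time, messages no longer hop between brokers, which sidesteps the duplicate-forwarding scenario described above.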