
I have an issue with one of our applications. The application is a self-written Java application that connects through JMS to more than 50 different message queues and consumes messages from those queues.

From a functional perspective, the processing of messages from the different queues works fine. However, while testing we found that the processing is far too slow: we are only able to process a few messages per queue per minute.

To better understand what is going on, I made a flight recording with JMC and saw that there is a lot of blocking time for each thread that consumes messages from a message queue:

Picture: Blocking JMS threads

Besides this graph, I also saw in the flight recording that a lot of time is spent accessing a specific WeakHashMap to close and get an XAResource:

Picture: Lock instances

The next step was to analyse the Bitronix JMS configuration. Here are the relevant parts:

On the Tomcat server level I have a resource.properties file that is loaded by Bitronix:

resource.cf1.className=com.ibm.mq.jms.MQXAQueueConnectionFactory
resource.cf1.uniqueName=jms/cf
resource.cf1.minPoolSize=1
resource.cf1.maxPoolSize=60
resource.cf1.driverProperties.hostName=genadev0059.mycompnany.com
resource.cf1.driverProperties.port=1515
resource.cf1.driverProperties.channel=APPL_CHL
resource.cf1.driverProperties.transportType=1
resource.cf1.driverProperties.queueManager=DEV

Inside the Spring application XML I have the following bean definitions to set up the connection:

<jee:jndi-lookup id="connectionFactory" jndi-name="jms/cf" resource-ref="true" proxy-interface="javax.jms.ConnectionFactory"/>

<bean id="userCredentialsConnectionFactory" class="org.springframework.jms.connection.UserCredentialsConnectionFactoryAdapter" p:targetConnectionFactory-ref="connectionFactory" p:username="$jms{jmsuser}" p:password="$jms{jmspwd}"/>

<bean id="cachedConnectionFactory" class="org.springframework.jms.connection.CachingConnectionFactory" p:sessionCacheSize="$fwk{jms.connectionFactory.sessionCacheSize}" p:targetConnectionFactory-ref="userCredentialsConnectionFactory"/> 

<bean id="parentJmsContainer" class="org.springframework.jms.listener.DefaultMessageListenerContainer"
      abstract="true"
      p:connectionFactory-ref="cachedConnectionFactory"
      p:sessionTransacted="true"
      p:transactionManager-ref="transactionManager"
      p:autoStartup="$fwk{jms.listener.start}"/>

In addition, for each message queue I have a container bean and its own class that processes messages from that queue:

<bean id="messageQueueThread1" parent="parentJmsContainer">
     <property name="destinationName" value="queue1" />
     <property name="messageListener">
            <bean class="com.mycompany.service.jms.Queue1Listener" />
     </property>
</bean>
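
For completeness: DefaultMessageListenerContainer also exposes concurrency settings that can be tuned per container. The sketch below uses the standard Spring property name concurrentConsumers; the value is only a placeholder, not a configuration I have verified:

<!-- Variant of the per-queue container; concurrentConsumers is a
     standard DefaultMessageListenerContainer property, the value
     shown is a placeholder only -->
<bean id="messageQueueThread1" parent="parentJmsContainer">
     <property name="destinationName" value="queue1" />
     <!-- number of concurrent consumer threads for this queue -->
     <property name="concurrentConsumers" value="1" />
     <property name="messageListener">
            <bean class="com.mycompany.service.jms.Queue1Listener" />
     </property>
</bean>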

I assume the problem is related to how the connection is configured. I tried several other approaches, but the result is always the same blocked threads.

Any input or suggestions are highly welcome.

Tianico

1 Answer


How many "actual" connections are there on the queue manager side? You should be using one connection per thread. If you share a connection between the threads, that is why you see the blocking.
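One way to get there with your Spring setup, sketched below, is to drop the CachingConnectionFactory layer and point the containers straight at the pooled XA factory, so each consumer thread obtains its own connection from the Bitronix pool. cacheLevelName is a real DefaultMessageListenerContainer property; whether CACHE_NONE is right for your Bitronix/MQ combination is an assumption on my part, not a verified fix:

<!-- Sketch only: disable Spring-side connection/session caching and
     reference the credentials-wrapping factory from the question
     directly, letting the Bitronix pool hand out connections -->
<bean id="parentJmsContainer"
      class="org.springframework.jms.listener.DefaultMessageListenerContainer"
      abstract="true"
      p:connectionFactory-ref="userCredentialsConnectionFactory"
      p:sessionTransacted="true"
      p:transactionManager-ref="transactionManager"
      p:cacheLevelName="CACHE_NONE"
      p:autoStartup="$fwk{jms.listener.start}"/>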

Roger
  • Based on what I monitored in WebSphere MQ Administrator, I see that there are a lot of **shared** connections for the queues consumed by this application. I have also monitored the client side with netstat, and there I likewise see a lot of outgoing connections to the MQ server. All of them go over the same port 1515, and the PID for all of them is 2488. Interestingly, for about 80% of them the connection state is "established" and for 20% it is "waiting" (with PID 0). The question is how I can influence this behaviour through the configuration – Tianico Jan 30 '17 at 13:57