
I have lately noticed that my c3p0 connection pool keeps growing and at times reaches the max limit I have specified. I initially thought the application code was not returning connections to the pool, but that was not the case. To reproduce the problem in a development environment, I ran a simple test:

I wrote an API that does nothing but run a very simple database query and return the result. Here is the c3p0 pool configuration I am using:

#Pool configuration
c3p0.initialPoolSize=10
c3p0.minPoolSize=10
c3p0.maxPoolSize=100
c3p0.acquireIncrement=1
c3p0.maxIdleTime=1800
c3p0.unreturnedConnectionTimeout=20
c3p0.idleConnectionTestPeriod=600
c3p0.testConnectionOnCheckout=true

On hitting the test API with concurrency 10, I expect the pool size never to grow beyond 10, but at times it reaches 20-30. What explains this?
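
Here is a minimal sketch of the test, assuming a ComboPooledDataSource configured from the properties above (driver class, JDBC URL, and credentials are assumed to come from the same configuration; SELECT 1 stands in for the real query):

import com.mchange.v2.c3p0.ComboPooledDataSource;

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class PoolGrowthTest {
    public static void main(String[] args) throws Exception {
        // Picks up the c3p0.properties shown above automatically
        ComboPooledDataSource ds = new ComboPooledDataSource();

        // 10 concurrent callers, matching the test concurrency
        ExecutorService clients = Executors.newFixedThreadPool(10);
        for (int i = 0; i < 1000; i++) {
            clients.submit(() -> {
                try (Connection c = ds.getConnection();
                     Statement s = c.createStatement();
                     ResultSet rs = s.executeQuery("SELECT 1")) { // trivial query
                    rs.next();
                } catch (Exception e) {
                    e.printStackTrace();
                }
            });
        }
        clients.shutdown();
        clients.awaitTermination(5, TimeUnit.MINUTES);
        ds.close();
    }
}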

I know I can get the pool size back down by setting maxIdleTimeExcessConnections to a lower value, but what is causing it to grow in the first place?
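
For reference, that setting would be one more line in the same properties file (the 120-second value is only an example):

#Release idle connections above minPoolSize after 120 seconds
c3p0.maxIdleTimeExcessConnections=120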

Here are the versions I am using:

c3p0: 0.9.1.2
hibernate-entitymanager: 4.2.1.Final
spring-data-jpa: 1.3.4.RELEASE
azi
  • it may or may not affect your concern (i don't remember if there've been changes that might affect this), but c3p0-0.9.1.2 is almost 9 years old. the current version is 0.9.5.2. a first thing to try would be to upgrade. – Steve Waldman May 16 '16 at 04:48
  • thanks @SteveWaldman... tried upgrading, but it didn't help... around 380 connections for 50 concurrent requests... 20 for 10 concurrent... had changed maxPoolSize to 400 for 50 concurrent to see the upper limit it goes to... for 50 concurrent, I should probably increase numHelperThreads from the default 3... but even that shouldn't lead to an increase in pool size, right? – azi May 16 '16 at 05:20
  • so, c3p0 in general increases its target size to at least `acquireIncrement` above its current size whenever 1) it has not yet maxed out; and 2) it encounters a request it cannot immediately serve. See https://github.com/swaldman/c3p0/blob/9f97c814aef31b2997d6ecfad1e3875c6136317b/src/java/com/mchange/v2/resourcepool/BasicResourcePool.java#L613-642 if you are curious. That means a low number of `numHelperThreads` could affect this. – Steve Waldman May 16 '16 at 05:40
  • If the thread pool is backed up, then Connections will remain unavailable for some time after check-ins and after idle tests are scheduled and before they are run. The effective size of the pool will be lower than the number of Connections checked out by the number of non-close maintenance tasks held up in the thread pool. So that's one hypothesis for why you see what you are seeing. Note that c3p0 makes no guarantees that 10 client threads would never provoke pool expansion from 10. If the overhead of managing the Connections means those 10 Connections can't be promptly served, it makes more. – Steve Waldman May 16 '16 at 05:44
  • that said, one would hope that the overhead of managing the Connections wouldn't require multiples of the number of client threads. but if the thread pool is badly backed up, it might. making `numHelperThreads` large or monitoring the Thread pool via JMX would definitely be good places to start. – Steve Waldman May 16 '16 at 05:47
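
A small sketch of the kind of monitoring suggested in the comments, assuming the same ComboPooledDataSource instance the test uses (the counters below come from c3p0's PooledDataSource interface; the one-second sampling loop is just for illustration):

import com.mchange.v2.c3p0.ComboPooledDataSource;

// Prints pool counters once a second, to show how far the total
// drifts above the number of connections that are actually busy.
class PoolSampler implements Runnable {
    private final ComboPooledDataSource ds;
    PoolSampler(ComboPooledDataSource ds) { this.ds = ds; }

    public void run() {
        try {
            while (!Thread.currentThread().isInterrupted()) {
                System.out.printf("total=%d busy=%d idle=%d orphaned=%d%n",
                        ds.getNumConnectionsDefaultUser(),
                        ds.getNumBusyConnectionsDefaultUser(),
                        ds.getNumIdleConnectionsDefaultUser(),
                        ds.getNumUnclosedOrphanedConnectionsDefaultUser());
                Thread.sleep(1000);
            }
        } catch (Exception e) { /* stop sampling */ }
    }
}

Starting new Thread(new PoolSampler(ds)).start() before submitting the load makes any expansion visible as it happens.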

1 Answer


Try enabling logging for your connection pool, so that you can see exactly when and why it requests new Connections.
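
As a sketch, assuming log4j is used for logging, raising these categories (c3p0's internal packages) surfaces BasicResourcePool's acquisition and expansion messages; the most detailed ones may require TRACE:

#c3p0 pool internals (expansion decisions, acquisition attempts)
log4j.logger.com.mchange.v2.resourcepool=TRACE
log4j.logger.com.mchange.v2.c3p0=DEBUG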

Please see the post below, which describes various troubleshooting techniques:

Running out of DB connections!

shankarsh15