
I have a Spring Boot (1.5.2) web application that runs in Tomcat (8.5) as the sole web app. The total number of threads in the JVM (OpenJDK 1.8.0_181) increases almost monotonically (though not strictly) at a roughly constant rate, going from a few hundred at startup to about 3000 in a week. By that time, the majority of the threads' stack traces look like:

WAITING Thread[pool-917-thread-1,5,main]
sun.misc.Unsafe.park(Native Method)
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
java.lang.Thread.run(Thread.java:748)

The application otherwise works fine, and these threads do not seem to take up much memory (at least compared to the tens of gigabytes the application normally consumes), but their existence points to some hidden leak inside the application. As of writing this, I cannot find a thread named pool-.* in any other state, so I don't know what they normally do before going zombie. The application is never redeployed without a Tomcat restart.

My question is whether anyone has encountered anything similar and how they solved it; if not, how could I diagnose why these threads are being created and never cleaned up?
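For completeness, this is roughly how I take the thread censuses quoted above (a minimal sketch; the class name is arbitrary). It snapshots all live threads and counts them by name pattern, collapsing numeric counters so that pool-917-thread-1 and pool-918-thread-1 land in the same bucket:

```java
import java.util.Map;
import java.util.TreeMap;

public class ThreadCensus {
    public static void main(String[] args) {
        Map<String, Integer> counts = new TreeMap<>();
        for (Thread t : Thread.getAllStackTraces().keySet()) {
            // Collapse counters: "pool-917-thread-1" -> "pool-N-thread-N"
            String pattern = t.getName().replaceAll("[0-9]+", "N");
            counts.merge(pattern, 1, Integer::sum);
        }
        counts.forEach((pattern, n) -> System.out.println(n + "\t" + pattern));
    }
}
```

Running this periodically (or diffing successive `jstack <pid>` dumps) shows which thread-name family is the one that grows.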

P.Péter
  • So you mean to say all the threads are waiting and none have closed? Had it been an issue with Tomcat, you would have easily found that out by now; anyway, do mention the Tomcat version you are using. Are you anywhere in your application setting up Executors? If yes, I would recommend checking that code, or using the constructor that takes a ThreadFactory as input. Override the newThread method in the factory to use custom names (a sketch of this follows the comments). If it's your application, then with the thread name you can easily figure out the faulty code and work on that particular piece. – Himanshu Bhardwaj Jan 18 '19 at 10:31
  • OK so it's a worker thread pool. What are the thread names? – Andy Brown Jan 18 '19 at 11:02
  • @AndyBrown thread names: `pool-[0-9]*-thread-1,5,main` – P.Péter Jan 18 '19 at 11:33
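A minimal sketch of the ThreadFactory naming trick suggested in the first comment (class name and prefix are illustrative, not from the thread):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.atomic.AtomicInteger;

// Stamps every thread with a recognizable prefix, so a leaked worker
// can be traced back to the pool that created it.
class NamedThreadFactory implements ThreadFactory {
    private final String prefix;
    private final AtomicInteger counter = new AtomicInteger();

    NamedThreadFactory(String prefix) {
        this.prefix = prefix;
    }

    @Override
    public Thread newThread(Runnable r) {
        return new Thread(r, prefix + "-" + counter.incrementAndGet());
    }
}

// Usage: ExecutorService pool =
//     Executors.newFixedThreadPool(4, new NamedThreadFactory("report-worker"));
```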

1 Answer


The most likely causes of threads being spawned in this fashion are:

  1. Tomcat is misconfigured, e.g. the executor maxThreads is set to an unreasonably high value (the default is 200).
  2. Application code creates an unbounded thread pool, e.g. Executors.newCachedThreadPool(), which experiences a thread spike.
  3. During application redeployment the thread pool created by the undeployed application is not stopped correctly. See this answer.
  4. Application code creates threads with new Thread().

Point 4 is unlikely since your stack trace shows java.util.concurrent.ThreadPoolExecutor. The pool- prefix is the default naming used by Executors.defaultThreadFactory(), so look for pools created without a custom ThreadFactory (grep application code for Executors and Tomcat configuration for executor definitions). Then cap the pool; a sketch of the likely leak pattern follows.
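For illustration (the class and method names are invented, not from the original answer), the typical shape of such a leak is a pool created per call with the default factory and never shut down:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Hypothetical leak pattern. Each call builds a fresh single-thread pool
// whose default ThreadFactory names the worker "pool-N-thread-1". The pool
// is never shut down, so the idle worker parks in LinkedBlockingQueue.take()
// forever -- exactly the stack trace in the question.
class LeakyService {
    void handleRequest(Runnable task) {
        ExecutorService pool = Executors.newFixedThreadPool(1); // new pool per call!
        pool.submit(task);
        // Missing: pool.shutdown(); -- the fix is to shut the pool down,
        // or better, to reuse one application-wide executor.
    }
}
```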

Karol Dowbecki
  • Point 1 is not likely. Tomcat uses its own `TaskQueue` class extending `LinkedBlockingQueue` and overriding `take()`, so `TaskQueue.take()` would appear in the stack trace. – Andy Brown Jan 18 '19 at 11:06
  • Point 3 is not likely; we always restart Tomcat before redeploying. I will edit the question to add additional info. – P.Péter Jan 18 '19 at 11:34
  • Point 1 is also not likely, as those threads should start with catalina-exec-, as server.xml states: ` – P.Péter Jan 18 '19 at 11:49