
I have a web application written in Java with Spring 4.0 and deployed on Tomcat (on Red Hat Linux). On profiling our webapp with JProfiler we found that most of the time is spent inside the Spring framework, and this is causing a slowdown in our APIs. For example, consider the data below, which shows that out of 614 seconds, 609 seconds are spent inside Spring, and this is for 105 API calls, which means the time per API call is ~6 seconds. So I wanted to know if there is a configuration in Spring which can avoid this overhead.

EDIT: Adding some more data that I got from JProfiler

  91.0% - 614 s org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest 
        90.2% - 609 s org.springframework.web.method.support.InvocableHandlerMethod.getMethodArgumentValues
            55.9% - 377 s org.springframework.validation.DataBinder.convertIfNecessary
            34.2% - 231 s org.springframework.web.method.annotation.RequestParamMethodArgumentResolver.resolveName
        0.8% - 5,709 ms org.springframework.web.method.support.InvocableHandlerMethod.invoke

EDIT: On drilling down further I found that out of this 90.2%, 88% of the time is eaten by the two methods below

  org.springframework.util.ConcurrentReferenceHashMap.put
  org.springframework.util.ConcurrentReferenceHashMap.get

and they are being called from org.springframework.core.ResolvableType.forType

Has anybody else observed this on Linux with a Spring app?

FYI, my controller method has 23 query parameters and 9 of them are List<>. Will this create any problem? Am I not supposed to have this many query parameters (@RequestParam)?
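
To illustrate, here is a sketch of the kind of handler signature I mean (parameter names are made up, and the real method has many more parameters):

    import java.util.List;

    import org.springframework.stereotype.Controller;
    import org.springframework.web.bind.annotation.RequestMapping;
    import org.springframework.web.bind.annotation.RequestMethod;
    import org.springframework.web.bind.annotation.RequestParam;
    import org.springframework.web.bind.annotation.ResponseBody;

    @Controller
    public class SearchController {

        // Illustrative only: the real method takes 23 @RequestParam parameters,
        // 9 of which are List<> types.
        @RequestMapping(value = "/search", method = RequestMethod.GET)
        @ResponseBody
        public String search(@RequestParam(value = "ids", required = false) List<Long> ids,
                             @RequestParam(value = "tags", required = false) List<String> tags,
                             @RequestParam(value = "regions", required = false) List<String> regions,
                             @RequestParam(value = "limit", required = false) Integer limit,
                             @RequestParam(value = "sort", required = false) String sort) {
            return "ok";
        }
    }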

Rahul

2 Answers


`park` does not perform a busy wait. It actually doesn't even know the condition the thread is waiting for; that's up to the caller. However, you can still have a lot of CPU consumption if `park` is called very often, e.g. when `unpark` has been called but, after re-checking the wait condition, `park` is called again. Then the fixed overhead of `park` will accumulate.

So the situation you have here seems to be that you have heavy contention on a particular lock. From the stack trace you have posted, I would guess that the ConcurrentReferenceHashMap has been configured for a concurrency level far smaller than your actual number of threads.
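
To illustrate what the concurrency level means (a minimal standalone sketch; it only demonstrates the contention effect and does not reconfigure the cache that Spring creates internally for ResolvableType):

    import org.springframework.util.ConcurrentReferenceHashMap;

    public class ConcurrencyLevelDemo {

        public static void main(String[] args) throws InterruptedException {
            // concurrencyLevel is an estimate of how many threads will update the
            // map concurrently; it determines the number of internal lock segments.
            // With a level of 1, every put() from every thread contends for the
            // same segment lock, so threads keep parking and unparking in AQS.
            final ConcurrentReferenceHashMap<String, String> map =
                    new ConcurrentReferenceHashMap<String, String>(16, 1);

            Runnable writer = new Runnable() {
                @Override
                public void run() {
                    for (int i = 0; i < 50000; i++) {
                        map.put(Thread.currentThread().getName() + "-" + i, "value");
                    }
                }
            };

            Thread[] threads = new Thread[32];
            for (int i = 0; i < threads.length; i++) {
                threads[i] = new Thread(writer, "writer-" + i);
                threads[i].start();
            }
            for (Thread t : threads) {
                t.join();
            }
            System.out.println("done, size=" + map.size());
        }
    }

Whether Spring's internal cache actually uses such a small level is only a guess; the point is just that the number of segments caps how many writers can proceed without blocking.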

Holger
  • I agree, that's what I had observed and written in the question: the caller method java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued loops and calls park again and again until it succeeds, so it's acquireQueued which is doing a busy wait. Will change the question title as it's misleading. Thanks – Rahul Jul 31 '14 at 10:43
  • @Rahul: but the AQS does not do “busy wait” *because it calls* `park`. `park` will release the CPU. – Holger Jul 31 '14 at 10:49
  • But as per my understanding, the implementation of park in os_linux.cpp does not release the CPU, as it uses pthread_mutex_trylock(_mutex); if that fails (because somebody else has already acquired the mutex), it returns immediately, and AQS will call park again in its for(;;) loop. Here is the code snippet: `// Don't wait if cannot get lock since interference arises from unblocking. Also check interrupt before trying wait. if (Thread::is_interrupted(thread, false) || pthread_mutex_trylock(_mutex) != 0) { return; }` – Rahul Jul 31 '14 at 10:54
  • That can happen only if another thread is calling `unpark` at the same time. But anyway, you are looking in the wrong place. As already said, it's not the implementation of `park` that matters but the fact that so many threads are contending for the same lock. – Holger Jul 31 '14 at 12:13

Seems like a custom argument resolver did the trick. After creating one as described in Spring HandlerMethodArgumentResolver not executing, the JProfiler timings are fine and my custom argument resolver is not eating as much CPU as HandlerMethodArgumentResolverComposite.resolveArgument was.

Now that extra 90.2% of the time has dropped to just 2% (tens of seconds).
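
For the record, here is a minimal sketch of the approach (class and field names are made up, and my real resolver binds all 23 parameters, not just these two):

    import java.util.Arrays;
    import java.util.Collections;
    import java.util.List;

    import org.springframework.context.annotation.Configuration;
    import org.springframework.core.MethodParameter;
    import org.springframework.web.bind.support.WebDataBinderFactory;
    import org.springframework.web.context.request.NativeWebRequest;
    import org.springframework.web.method.support.HandlerMethodArgumentResolver;
    import org.springframework.web.method.support.ModelAndViewContainer;
    import org.springframework.web.servlet.config.annotation.EnableWebMvc;
    import org.springframework.web.servlet.config.annotation.WebMvcConfigurerAdapter;

    // Hypothetical holder for the query parameters the controller needs.
    class SearchCriteria {
        List<String> tags = Collections.emptyList();
        String sort;
    }

    // Fills SearchCriteria straight from the request, so the per-parameter
    // resolution/conversion that showed up under DataBinder.convertIfNecessary
    // is skipped for this controller.
    class SearchCriteriaArgumentResolver implements HandlerMethodArgumentResolver {

        @Override
        public boolean supportsParameter(MethodParameter parameter) {
            return SearchCriteria.class.equals(parameter.getParameterType());
        }

        @Override
        public Object resolveArgument(MethodParameter parameter,
                                      ModelAndViewContainer mavContainer,
                                      NativeWebRequest webRequest,
                                      WebDataBinderFactory binderFactory) {
            SearchCriteria criteria = new SearchCriteria();
            String[] tags = webRequest.getParameterValues("tags");
            if (tags != null) {
                criteria.tags = Arrays.asList(tags);
            }
            criteria.sort = webRequest.getParameter("sort");
            return criteria;
        }
    }

    @Configuration
    @EnableWebMvc
    class WebConfig extends WebMvcConfigurerAdapter {

        @Override
        public void addArgumentResolvers(List<HandlerMethodArgumentResolver> argumentResolvers) {
            argumentResolvers.add(new SearchCriteriaArgumentResolver());
        }
    }

The controller method then takes a single SearchCriteria parameter instead of the 23 individual @RequestParam parameters.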

Rahul