
I'm trying to nail down a performance issue under load in an application which I didn't build, but have become very familiar with the workings of.

The architecture is: mobile apps call an ASP.NET MVC 3 website to get data to display. The ASP.NET site calls a third-party SOAP API using WCF clients (basicHttpBinding), caching results as much as it can to minimize load on that third party.

The load from the mobile apps is in the order of 200+ requests per second at peak times, which translates to something in the order of 20 SOAP requests per second to the third-party, after caching.

Normally it runs fine, but we get periods of cascading slowness where every request to the API starts taking 5 seconds, then 10, 15, 20, 25, 30, at which point they time out (we set the WCF client timeout to 30 seconds). Clearly there is a bottleneck somewhere which is causing an increasingly long queue until requests can't be serviced inside 30 seconds.

Now, the third-party API is out of my control but they swear that it should not be having any issues whatsoever with 20 requests per second. So I've been looking into the possibility of a bottleneck at my end.

I've read questions on StackOverflow about ServicePointManager.DefaultConnectionLimit and connectionManagement, but digging through the source, I think the problem is somewhat more fundamental. It seems that our WCF client object (which is a standard System.ServiceModel.ClientBase<T> auto-generated by "Add Service Reference") is being stored in the cache, and thus when multiple requests come in to the ASP.NET site simultaneously, they will share a single Client object.

From a quick experiment with a couple of console apps and spawning multiple threads to call a deliberately slow WCF service with a shared Client object, it seems to me that only one call will occur at a time when multiple threads use a single ClientBase. This would explain a bottleneck when e.g. 20 calls need to be made per second and each one takes more than 50ms to complete.
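For reference, the experiment looked roughly like this (a minimal sketch; `SlowServiceClient` and `GetDataSlowly` are hypothetical stand-ins for the auto-generated proxy and its operation):

```csharp
using System;
using System.Linq;
using System.Threading;

class Program
{
    static void Main()
    {
        // One client instance shared across all threads, mimicking the
        // cached ClientBase<T> in the ASP.NET site.
        var sharedClient = new SlowServiceClient();
        var sw = System.Diagnostics.Stopwatch.StartNew();

        var threads = Enumerable.Range(0, 10).Select(i => new Thread(() =>
        {
            // The service method deliberately takes ~1s server-side.
            // If calls on a shared ClientBase serialize, total elapsed
            // time approaches 10s rather than ~1s.
            sharedClient.GetDataSlowly();
            Console.WriteLine("Call {0} done at {1}ms", i, sw.ElapsedMilliseconds);
        })).ToList();

        threads.ForEach(t => t.Start());
        threads.ForEach(t => t.Join());
        Console.WriteLine("Total: {0}ms", sw.ElapsedMilliseconds);
    }
}
```

If the calls really do serialize, the per-call completion times printed above climb in roughly 1-second steps, which is the same queueing shape as the 5s → 10s → 15s slowdown described earlier.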

Can anyone confirm that this is indeed the case?

And if so, and if I switched to every request creating its own WCF Client object, would I just need to alter ServicePointManager.DefaultConnectionLimit to something greater than the default (which I believe is 2?) before creating the Client objects, in order to increase my maximum number of simultaneous connections?

(sorry for the verbose question, I figured too much information was better than too little)

  • Is your System.ServiceModel.ClientBase object instantiated per web request? – Dmitry Harnitski Aug 14 '12 at 01:18
  • @DmitryHarnitski: no, it is stored in the cache (standard `HttpRuntime.Cache`), so each web request will get the same object from the cache. – Carson63000 Aug 14 '12 at 01:23
  • That is the bottleneck you are looking for. You may want to cache data, but not the WCF client. Client creation is a relatively cheap operation after the client has been created the first time. – Dmitry Harnitski Aug 14 '12 at 01:31
  • @DmitryHarnitski: turns out that client creation is actually **not** cheap, it takes 50-100ms and creating a new client for every web request spiked our CPU to 100%. I'm trying out creating a pool of clients at app startup, and allocating them round-robin to web requests. – Carson63000 Aug 15 '12 at 06:24
  • It is not cheap the first time. After that it is cached by .NET and is much cheaper: http://stackoverflow.com/questions/10859832/why-is-the-first-wcf-client-call-slow – Dmitry Harnitski Aug 15 '12 at 10:52
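The pool-at-startup approach mentioned in the comments could be sketched like this (hypothetical names throughout; `MyServiceClient` stands in for the generated `ClientBase<T>` proxy, and concurrent requests will still share a client whenever more requests are in flight than the pool size):

```csharp
using System.Threading;

// Simple round-robin pool of pre-created WCF clients, built once at
// app startup to avoid per-request client construction cost.
public static class ClientPool
{
    private static MyServiceClient[] _clients;
    private static int _next = -1;

    public static void Initialize(int size)
    {
        _clients = new MyServiceClient[size];
        for (int i = 0; i < size; i++)
            _clients[i] = new MyServiceClient();
    }

    public static MyServiceClient GetClient()
    {
        // Interlocked.Increment gives a thread-safe counter; the uint
        // cast keeps the index non-negative even after int overflow.
        int index = (int)((uint)Interlocked.Increment(ref _next) % _clients.Length);
        return _clients[index];
    }
}
```

A real implementation would also need to handle a pooled client whose channel has faulted (a faulted `ClientBase<T>` cannot be reused and must be replaced), which this sketch omits.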

0 Answers