We have an application pool dedicated to a WCF service that is called infrequently (maybe 15-20 times per day).  The calls can take several minutes, however, and the other day we got burned when IIS recycled the app pool while the call was still processing because the shutdown timeout ran out.

We're considering using request-limit recycling instead, but my question is this: when the application pool recycles "after x requests", is that after the xth request completes?  Or does it kick off the request, start the overlapped worker process to handle new requests, and then subject the xth request to the same shutdown timeout that currently burns us?
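
For reference, the request-limit recycling we're considering would be configured with something like the following (the app pool name and the threshold are placeholders):

    %windir%\system32\inetsrv\appcmd.exe set apppool "WcfServicePool" /recycling.periodicRestart.requests:100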

Question in a similar vein:
How to detect if the current application pool is winding up in IIS7.5 and Asp.Net 3.5+

James King

2 Answers


Check your Shutdown Time Limit setting on the app pool.

Regardless of how the recycle is triggered, this setting determines how long running requests are allowed to carry on before the worker process is forcibly shut down.

When an app pool is recycled, IIS first tries to drain the running requests from the old worker process; in the meantime a new worker process has already been started and is accepting new requests. By making the setting high enough to accommodate your long-running requests, you allow IIS to drain the old app pool safely.
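
For example, with appcmd (the app pool name and the 20-minute value are placeholders; in IIS Manager this is Advanced Settings > Process Model > Shutdown Time Limit, entered in seconds):

    %windir%\system32\inetsrv\appcmd.exe set apppool "WcfServicePool" /processModel.shutdownTimeLimit:00:20:00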

Avner
  • What you say isn't _quite_ correct... recycling overlaps the app pool, so the new one is started before the old one is terminated. But yeah, that's the setting that's burning us. I'm hesitant to bump it up to several minutes, but we may have to... it would have to match the WCF request timeout, though, to make any sense. – James King Apr 26 '17 at 13:32
  • right... ok yes you are correct about the timing of the app pool and I've made that edit. The answer is still correct though by your own admission so would appreciate you accepting it. BTW Your request timeout should already be long or you would already be experiencing terminated requests. You are right that the Shutdown Time Limit should be equal or less than the execution timeout. – Avner Apr 27 '17 at 01:30
  • I haven't forgotten your answer, just leaving the question open for other answers. I'm also looking for downsides to an extended shutdown timeout... to match the WCF service settings, I'd need to have the shutdown timeout set to 20 minutes (our worst-case scenario + padding) - not wild about that. I didn't mean to sound pedantic, sorry - that difference was the hope behind my question. With a new instance starting immediately, the current instance _could_ allow current requests to finish without impact. But I believe you're saying it doesn't, which is pretty much what I expected. – James King Apr 28 '17 at 17:08
  • Ideally, I'd like the shutdown timeout to only apply to the application shutdown events, which wouldn't fire until the in-process requests complete - those have their own timeouts, after all. I get the arguments against why it shouldn't, but then I've come full circle - if the timeout is short for a reason, then bumping it up is a bad idea. – James King Apr 28 '17 at 17:09
  • Hi James - no worries I get what you are saying. Yes I'm saying the current instance won't allow current requests to finish if they exceed the shutdown time limit. I don't follow your concern about setting the shutdown time limit to be high. You have a long running request already - you must provide for that long request to complete in your execution time limit and your app pool shutdown time limit. Another point from http://stackoverflow.com/a/7073044/1165140 is that it will wait up to the limit, if the request finishes earlier then it will terminate the app pool earlier. – Avner Apr 30 '17 at 07:02
  • Sorry, haven't been able to get back to this until today... it seems like I have no choice but to set the shutdown timeout to match the WCF timeout. Either that or disable recycling altogether... not wild about either choice, but that's unfortunately what I have to live with :( Thanks for the help! – James King May 31 '17 at 14:49
  • Thanks for taking the time to get back to this ! – Avner May 31 '17 at 22:56
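
To summarize the comment thread above: for a 20-minute worst case, the shutdownTimeLimit shown above would be set to 00:20:00, and the WCF and ASP.NET timeouts in the service's web.config would need to allow at least as long. The binding type, binding name and values below are illustrative only:

    <system.serviceModel>
      <bindings>
        <basicHttpBinding>
          <!-- basicHttpBinding and the binding name are placeholders; use the binding the service actually exposes -->
          <binding name="longRunningBinding" sendTimeout="00:20:00" receiveTimeout="00:20:00" />
        </basicHttpBinding>
      </bindings>
    </system.serviceModel>
    <system.web>
      <!-- executionTimeout is in seconds; 1200 seconds = 20 minutes (enforced only when debug="false") -->
      <httpRuntime executionTimeout="1200" />
    </system.web>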

I recommend you do the following.

1- Create a bool Ping() { return true; } method on your WCF service (see the sketch after this list).

2- Create an IIS web application responsible for polling the Ping() method. This is the only way I found to keep my WCF services alive.

3- Long-running WCF operations should also be called from another background IIS process (web app) that reads from a message queue and calls the WCF operation. So you need to log the long-running WCF call requests in queues. This way, you have the possibility of retrying the call if the app pool hosting your WCF services shuts down.
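
A minimal sketch of item 1, assuming an existing service contract; every name below other than Ping is a placeholder:

    using System.ServiceModel;

    [ServiceContract]
    public interface ILongRunningService
    {
        // Existing long-running operation (hypothetical name and signature).
        [OperationContract]
        void ProcessBatch(string requestId);

        // Lightweight keep-alive target for the polling web application (item 2).
        [OperationContract]
        bool Ping();
    }

    public class LongRunningService : ILongRunningService
    {
        public void ProcessBatch(string requestId)
        {
            // ... existing long-running work ...
        }

        public bool Ping()
        {
            // Answering at all is the signal; the return value just confirms the host is up.
            return true;
        }
    }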

Houssam Hamdan
  • I'm not clear on how this prevents a shutdown timeout... IIS has a built-in ping to the service, with configuration settings for how frequently and how long to wait before considering the app dead. And either way, the service isn't dead, it's busy... pinging wouldn't tell me whether a per-call instance has completed processing. This sounds like a pretty complicated design architecture, and I'm not seeing what it's buying me? (The call is made from a windows service that could certainly retry the operation, but in general, completing a current call is preferable to retrying) – James King Apr 26 '17 at 13:43
  • I understand your message. What it looked like to me is that no matter what changes you make to the WCF app pool settings, the w3wp.exe process will shut down after the inactivity time. If you really want to keep it alive and improve the performance of the first call (after shutdown), the only way is to poll the service every x minutes. If queuing and retrying is not a good solution, then I can only advise you to work on improving the performance of your WCF operation, or segregate it into multiple small operations if possible. – Houssam Hamdan Apr 26 '17 at 16:40
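
For completeness: the inactivity shutdown mentioned here is the app pool's Idle Time-out (processModel.idleTimeout, 20 minutes by default). If polling isn't desirable, it can also be raised or disabled outright, e.g. (placeholder pool name again):

    %windir%\system32\inetsrv\appcmd.exe set apppool "WcfServicePool" /processModel.idleTimeout:00:00:00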