
I am attempting to load-test a Comet-ish server using a C# load testing client that creates many HttpWebRequests (1000+). I am finding that after a few minutes, randomly, the server takes a long time to receive some of the requests. The client thinks it sent the request successfully, but it actually takes 40s to arrive at the server, at which point it is too late. (Since this is a Comet-type server, the server ends up dropping the client session, which is bad). I tried switching from asynchronous calls to synchronous calls but it didn't make a difference.

The problem must be at the client end. I did some tracing with Wireshark and it turns out that the request actually does take 40 or so seconds to make it to the network pipe from the client software! The server services the request right away when it receives it on its pipe.

Maybe C# sees that the request looks exactly the same as one I made earlier and is caching it for some weird reason? I am including "Cache-Control: no-cache" in my responses to avoid caching altogether.

evilfred
  • What gets written to the body of the response for the failed requests? – Douglas Aug 31 '10 at 20:36
  • Good point, I'll check that... – evilfred Aug 31 '10 at 20:49
  • It turns out that the request DOES arrive, it just takes a long time to get to the server, at which point my server writes a response indicating that the session doesn't exist anymore (as expected in these circumstances). I updated the question. – evilfred Aug 31 '10 at 21:04
  • I tried setting AllowWriteStreamBuffering to false but it didn't help. As in: http://blogs.msdn.com/b/delay/archive/2009/09/08/when-framework-designers-outsmart-themselves-how-to-perform-streaming-http-uploads-with-net.aspx – evilfred Aug 31 '10 at 23:20

2 Answers


I ran into a similar issue when I was first building my web crawler, which makes upwards of 2,000 requests every minute. The problem turned out to be that I wasn't always disposing of the HttpWebResponse objects in a timely fashion. The garbage collection / finalization mechanism will not keep up when you're making requests at that rate.

Whether you're doing synchronous or asynchronous requests doesn't really matter. Just make sure you always call response.Close().
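A minimal sketch of that pattern (the endpoint URL and request count are placeholders; the key point is that the response and its stream are disposed deterministically rather than left to the finalizer):

```csharp
using System;
using System.IO;
using System.Net;

class RequestLoop
{
    static void Main()
    {
        for (int i = 0; i < 1000; i++)
        {
            // Hypothetical endpoint, for illustration only.
            var request = (HttpWebRequest)WebRequest.Create("http://localhost:8080/comet");
            try
            {
                // Dispose the response (and its stream) deterministically;
                // relying on finalization leaks connections at high request rates.
                using (var response = (HttpWebResponse)request.GetResponse())
                using (var reader = new StreamReader(response.GetResponseStream()))
                {
                    reader.ReadToEnd();
                }
            }
            catch (WebException)
            {
                // Abort releases the underlying connection even when
                // GetResponse throws before a response object exists.
                request.Abort();
            }
        }
    }
}
```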

Jim Mischel
  • I have been wrapping them all in using() blocks, which should call Close(), I believe? – evilfred Aug 31 '10 at 21:18
  • @evilfred: Yes, a using block should call Close(), although I've noticed that from time to time calling Close() will hang and I don't know why. I've found that calling request.Abort() followed by response.Close() is most effective. – Jim Mischel Aug 31 '10 at 21:56
  • OK, I'll try that. I was using the other order and thought it was working, but it wasn't. :S – evilfred Aug 31 '10 at 21:57
  • Nope, it's still screwing up, argh. – evilfred Aug 31 '10 at 22:05
  • Hi Jim, I'm intending to perform a similar operation where my client application will perform almost 2,000 requests in a minute, just to check if the web page is changed or not. Here is my original question asked on the forum. Can you please help me with? http://stackoverflow.com/questions/6239485/httpwebrequest-vs-webclient-special-scenario – code master Jun 06 '11 at 14:14

You may be hitting the default client connection limit. By default, .NET allows only two concurrent connections per host, and any further requests queue up behind them.

To get around this, add this to your app.config file:

  <system.net>
    <connectionManagement>
      <remove address="*"/>
      <add address="*" maxconnection="10" />
    </connectionManagement>
  </system.net>

Experiment with maxconnection to see where your effective upper limit is.
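If you'd rather not touch app.config, the same limit can be raised in code via ServicePointManager (equivalent to the setting above; 10 is just the example value and must be set before the first request is made):

```csharp
using System.Net;

class Config
{
    static void Raise()
    {
        // Equivalent to <add address="*" maxconnection="10" /> in app.config;
        // applies to ServicePoints created after this line runs.
        ServicePointManager.DefaultConnectionLimit = 10;
    }
}
```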

Ed Power