
I am programming with sockets (TcpListener and TcpClient actually) in C#. I wrote a server that accepts client connections and streams data to them.

In order to test scalability, I wrote a test harness that opens a certain number of connections (say 1000) to the server in a loop and writes whatever data is received to the console.
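For reference, the harness does roughly the following (a simplified sketch, not the exact code; the address and port are placeholders):

```csharp
using System;
using System.Net.Sockets;
using System.Text;

class TestHarness
{
    static void Main()
    {
        const int connectionCount = 1000;

        for (int i = 0; i < connectionCount; i++)
        {
            var client = new TcpClient();
            client.Connect("127.0.0.1", 12345); // placeholder endpoint

            int id = i;
            NetworkStream stream = client.GetStream();
            var buffer = new byte[4096];

            // Keep reading asynchronously and dump whatever arrives to the console.
            AsyncCallback onRead = null;
            onRead = ar =>
            {
                int read = stream.EndRead(ar);
                if (read > 0)
                {
                    Console.WriteLine("[{0}] {1}", id, Encoding.UTF8.GetString(buffer, 0, read));
                    stream.BeginRead(buffer, 0, buffer.Length, onRead, null);
                }
            };
            stream.BeginRead(buffer, 0, buffer.Length, onRead, null);
        }

        Console.ReadLine(); // keep the process (and the connections) alive
    }
}
```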

After the server has accepted about 1300 connections, the clients' connection attempts start failing with the usual "No connection could be made because the target machine actively refused it" exception. If the clients keep retrying, some connections get through, but many still don't. I even tried introducing delays, e.g. three simultaneous clients each opening one connection per second to the server, but the problem remains.

My guess was that the listen backlog was becoming full, but given the delays I introduced, I now doubt it. How can this behaviour be explained and solved?
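Regarding the backlog: I know TcpListener.Start has an overload that takes the maximum length of the pending-connections queue, so it can be raised explicitly (the port here is just a placeholder):

```csharp
using System.Net;
using System.Net.Sockets;

class BacklogExample
{
    static void Main()
    {
        var listener = new TcpListener(IPAddress.Any, 12345); // placeholder port

        // Start(int) specifies the maximum length of the pending-connections
        // queue (the listen backlog) instead of leaving it at the default.
        listener.Start(1000);
    }
}
```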

Edit: before anyone else jumps on this question and marks it as duplicate without having read it...

I am using asynchronous sockets via the Asynchronous Programming Model (APM) - the old BeginXXX/EndXXX pattern, not the newer async/await pattern. The APM uses the thread pool underneath, so this is not a naive one-thread-per-connection model. The connections are dormant most of the time; when I/O does occur, the .NET Framework dispatches the work to thread-pool threads.
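To make the model concrete, the server's accept path follows the usual APM shape, roughly like this (simplified, with a placeholder port and payload, not the actual code):

```csharp
using System;
using System.Net;
using System.Net.Sockets;
using System.Text;

class Server
{
    static TcpListener listener;

    static void Main()
    {
        listener = new TcpListener(IPAddress.Any, 12345); // placeholder port
        listener.Start();
        listener.BeginAcceptTcpClient(OnAccept, null);
        Console.ReadLine();
    }

    static void OnAccept(IAsyncResult ar)
    {
        TcpClient client = listener.EndAcceptTcpClient(ar);

        // Post the next accept right away so new clients are picked up
        // while this one is serviced on a thread-pool thread.
        listener.BeginAcceptTcpClient(OnAccept, null);

        // Stream some data to the client asynchronously.
        NetworkStream stream = client.GetStream();
        byte[] data = Encoding.UTF8.GetBytes("streamed data\n");
        stream.BeginWrite(data, 0, data.Length,
            writeAr => stream.EndWrite(writeAr), null);
    }
}
```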

Edit 2: The gist of this question, for those who thought it was too [insert silly adjective here], is: why does a server drop connections when under heavy load? The error message I quoted usually occurs when a connection cannot be established at all (i.e. when you have the wrong IP/port), but that is clearly not the case here.

  • At the end - do not use the non-scalable API. Please use the link I provided - it has references and explanations on how to write a scalable server. – TomTom Jun 12 '14 at 14:17
  • The link you provided has nothing to do with my question! I am not using async/await or anything. I'm trying to understand why connections are dropped. And I'm using the old Begin/End asynchronous calls. – Gigi Jun 12 '14 at 14:22
  • The problem is: If you do not do async you run a thread per connection. That is not scalable - be happy you got 1300 connections to start with. The overhead is staggering. Using async / select etc. with a SMALL number of threads handling a large number of connections is the ONLY way to build a scalable server. – TomTom Jun 12 '14 at 14:24
  • I *AM* using asynchronous sockets! Will you please read a question (and comments) before you jump on it and mark it as duplicate? – Gigi Jun 12 '14 at 14:26
  • It is definitely not a thread per connection. Asynchronous sockets use the thread pool underneath. Can you please un-duplicate this, maybe I can get a decent answer? – Gigi Jun 12 '14 at 14:28
  • Reopened, but I still vote to close - you seem to (a) provide no sensible information and (b) not know how to use a profiler. I suggest you attach a profiler and find out where the time is spent, as well as providing code samples. We cannot provide high-level teaching here - and for a specific problem you really are way too abstract. – TomTom Jun 12 '14 at 14:34
  • Thanks for your suggestions. It would be very helpful of you if you could explain how I can determine why my server is dropping connections by using a profiler. – Gigi Jun 12 '14 at 14:44
  • Your server will not drop connections without the program being too busy. The question is what it is busy with. If the connect queue is too slow - guess what, something is making it slow. And a profiler will help you find out why it is falling behind. – TomTom Jun 12 '14 at 14:46

0 Answers