
I have created a simple TCP server - it works pretty well.

The problems start when we switch to the stress tests. Since our server should handle many concurrent open sockets, we created a stress test to check this. Unfortunately, it looks like the server is choking and cannot respond to new connection requests in a timely fashion when the number of concurrent open sockets is around 100.

We have already tried a few types of server - and all produce the same behavior.

The server can be something like the samples in this post (all produce the same behavior):

How to write a scalable Tcp/Ip based server

Here is the code that we are using - when a client connects, the server will just hang in order to keep the socket alive:


using System;
using System.Net;
using System.Net.Sockets;
using System.Threading;

public class Server
{
    private static readonly TcpListener listener = new TcpListener(IPAddress.Any, 2060);

    public Server()
    {
        listener.Start();
        Console.WriteLine("Started.");

        while (true)
        {
            Console.WriteLine("Waiting for connection...");
            var client = listener.AcceptTcpClient();
            Console.WriteLine("Connected!");
            // each connection has its own thread
            new Thread(ServeData).Start(client);
        }
    }

    private static void ServeData(object clientSocket)
    {
        Console.WriteLine("Started thread " + Thread.CurrentThread.ManagedThreadId);

        var rnd = new Random();
        try
        {
            var client = (TcpClient)clientSocket;
            var stream = client.GetStream();
            byte[] arr = new byte[1024];
            stream.Read(arr, 0, 1024);
            Thread.Sleep(int.MaxValue);

        }
        catch (SocketException e)
        {
            Console.WriteLine("Socket exception in thread {0}: {1}", Thread.CurrentThread.ManagedThreadId, e);
        }
    }
}

The stress test client is a simple TCP client that loops and opens sockets, one after the other:

using System;
using System.Collections.Generic;
using System.Net;
using System.Net.Sockets;
using System.Threading;

class Program
{
    static List<Socket> sockets;

    private static void go()
    {
        Socket newsock = new Socket(AddressFamily.InterNetwork,
                                    SocketType.Stream, ProtocolType.Tcp);
        IPEndPoint iep = new IPEndPoint(IPAddress.Parse("11.11.11.11"), 2060);
        try
        {
            newsock.Connect(iep);
        }
        catch (SocketException ex)
        {
            Console.WriteLine(ex.Message);
        }
        lock (sockets)
        {
            sockets.Add(newsock);
        }
    }

    static void Main(string[] args)
    {
        sockets = new List<Socket>();
        //int start = 1;// Int32.Parse(Console.ReadLine());
        for (int i = 1; i < 1000; i++)
        {
            go();
            Thread.Sleep(200);
        }
        Console.WriteLine("press a key");
        Console.ReadKey();
    }
}

Is there an easy way to explain this behavior? Would a C++ implementation of the TCP server produce better results? Or is it actually a client-side problem?

Any comment will be welcomed!

ofer

  • What OS are you testing on? You should not be creating a new thread for every connection; you should look at using the async interface for general scalability - see BeginAccept, BeginReceive etc. – Chris Taylor Dec 29 '10 at 14:59
  • Win 7. I am aware that the async approach is preferable; however, the behavior described above occurs on any server implementation I tried, including the callback-based one. – ofer Dec 29 '10 at 15:17
  • You should not start one thread per client. This does not scale well, as the process will spend more time scheduling than doing the actual work. Use the thread pool instead. You should also close the streams once they are used (look at the C# using pattern). – Simon Mourier Dec 29 '10 at 18:12
  • First, as others have noted, do not create one thread per connection. Use the async API. Even if your server didn't do anything else (like, say, serve content), you'd be limited to [around 2000 threads](http://blogs.technet.com/b/markrussinovich/archive/2009/07/08/3261309.aspx) in any 32-bit process. Secondly, please edit your question supplying the following information: OS (yes, this is very important) and the error message that you get (and where you get it). – Stephen Cleary Dec 29 '10 at 20:01

2 Answers


Specify a huge listener backlog: http://msdn.microsoft.com/en-us/library/5kh8wf6s.aspx
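
For example, with the TcpListener from the question the backlog can be passed to Start (a minimal sketch; 10000 is just an arbitrary large value):

    // assumes: using System.Net; using System.Net.Sockets;
    var listener = new TcpListener(IPAddress.Any, 2060);
    // the Start(int) overload sets the maximum length of the pending-connections queue
    listener.Start(10000);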

fejesjoco
  • Well, I tried setting the backlog to Int32.MaxValue - it didn't help much. – ofer Dec 29 '10 at 15:18
  • Yeah, the TcpListener also defaults to Int32.MaxValue, which happens to be a special constant that uses a system default, which I guess will be around 100 (128 I think). Lucky huh? Try 10000. (Source: http://msdn.microsoft.com/en-us/library/ms739168%28VS.85%29.aspx) – fejesjoco Dec 29 '10 at 16:48
  • In this situation you shouldn't NEED a 'huge' listen backlog. The listen backlog is used to size the queue of connections that are in the process of being established. Here we have a single connection per 200ms, even with the not especially scalable, thread per connection, design the server should be able to accept connections faster than the client is initiating them, especially since the client is issuing blocking connect calls and so the 200ms timer doesn't even START until the connect has completed... – Len Holgate Dec 29 '10 at 19:56

Firstly, a thread-per-connection design is unlikely to be especially scalable; you would do better to base your design on an asynchronous server model which uses I/O Completion Ports under the hood. This, however, is unlikely to be the problem in this case, as you're not really stressing the server that much.
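
For illustration only, an asynchronous accept/read flow might look something like the sketch below - the class name, port and buffer size are placeholders rather than anything from the question:

    using System;
    using System.Net;
    using System.Net.Sockets;

    public class AsyncServer
    {
        private readonly TcpListener listener = new TcpListener(IPAddress.Any, 2060);

        public void Start()
        {
            listener.Start();
            // post the first asynchronous accept; the callback runs when a client connects
            listener.BeginAcceptTcpClient(OnAccept, null);
        }

        private void OnAccept(IAsyncResult ar)
        {
            TcpClient client = listener.EndAcceptTcpClient(ar);
            // immediately post another accept so new connections are never kept waiting
            listener.BeginAcceptTcpClient(OnAccept, null);

            NetworkStream stream = client.GetStream();
            byte[] buffer = new byte[1024];
            stream.BeginRead(buffer, 0, buffer.Length, readAr =>
            {
                int bytesRead = stream.EndRead(readAr);
                // process the data and post the next read here
            }, null);
        }
    }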

Secondly, the listen backlog is a red herring here. The listen backlog is used to provide a queue for connections that are waiting to be accepted. In this example your client uses a synchronous connect call, which means that the client will never have more than one connect attempt outstanding at any one time. If you were using asynchronous connection attempts in the client, then you would be right to look at tuning the listen backlog, perhaps.
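
To illustrate the difference, a client that issued asynchronous connects (a rough sketch only, reusing the address and port from the question - this is not the original test client) could have many connect attempts in flight at once, and that is the situation where the backlog starts to matter:

    // assumes: using System; using System.Net; using System.Net.Sockets;
    for (int i = 0; i < 1000; i++)
    {
        var sock = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
        // BeginConnect returns immediately, so many attempts can be outstanding at once
        sock.BeginConnect(new IPEndPoint(IPAddress.Parse("11.11.11.11"), 2060), ar =>
        {
            try { sock.EndConnect(ar); }
            catch (SocketException ex) { Console.WriteLine(ex.Message); }
        }, null);
        // a real test would keep track of the sockets and close them; omitted here for brevity
    }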

Thirdly, given that the client code doesn't show that it sends any data, you can simply issue the read calls and remove the sleep that follows them; the read calls will block. The sleep just confuses matters.
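
In other words, ServeData could be reduced to something like this sketch (the same structure as the code in the question, just without the Sleep):

    private static void ServeData(object clientSocket)
    {
        var client = (TcpClient)clientSocket;
        var stream = client.GetStream();
        byte[] arr = new byte[1024];
        // Read blocks until data arrives or the peer closes the connection,
        // so there is no need to Sleep just to keep the socket open
        int bytesRead = stream.Read(arr, 0, arr.Length);
    }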

Are you running the client and the server on the same machine?

Is this ALL the code in both client and server?

You might try and eliminate the client from the problem space by using my free TCP test client which is available here: http://www.lenholgate.com/blog/2005/11/windows-tcpip-server-performance.html

Likewise, you could test your test client against one of my simple free servers, like this one: http://www.lenholgate.com/blog/2005/11/simple-echo-servers.html

I can't see anything obviously wrong with the code (apart from the overall design).

Len Holgate