
I have written a high performance TCP server in C# using SocketAsyncEventArgs. I have been testing its performance with two very simple clients, each creating 2000 parallel continuous loops. One client makes use of asynchronous calls to TcpClient; the other makes use of synchronous calls.

Asynchronous

Parallel.For(0, numClients, parallelOptions, async i =>
{
    while (true)
    {
        var tcpClient = new TcpClient();

        try
        {
            await tcpClient.ConnectAsync(host, port);

            await tcpClient.GetStream().WriteAsync(message);

            var buffer = new byte[1024];
            await tcpClient.GetStream().ReadAsync(buffer, 0, 1024);

            tcpClient.GetStream().Close();
        }
        catch (Exception ex)
        {
            Console.WriteLine($"{DateTime.Now.ToLongTimeString()}: {ex.Message}");
        }
        finally
        {
            tcpClient.Close();
            tcpClient.Dispose();
        }
    }
});

Synchronous

Parallel.For(0, numClients, parallelOptions, i =>
{
    while (true)
    {
        var tcpClient = new TcpClient();

        try
        {
            tcpClient.Connect(host, port);

            tcpClient.GetStream().Write(message);

            var buffer = new byte[1024];
            tcpClient.GetStream().Read(buffer, 0, 1024);

            tcpClient.GetStream().Close();
        }
        catch (Exception ex)
        {
            Console.WriteLine($"{DateTime.Now.ToLongTimeString()}: {ex.Message}");
        }
        finally
        {
            tcpClient.Close();
            tcpClient.Dispose();
        }
    }
});

The synchronous version iterates continuously without any errors.

The asynchronous version, however, results in many "No connection could be made because the target machine actively refused it" errors. My assumption is that this client is flooding the TCP listen backlog queue, causing subsequent inbound connections to be rejected.

What's going on? How can I protect server throughput from clients that choose to connect asynchronously?

Tom Davis
  • Take a look at this question https://stackoverflow.com/questions/10342006/iis-request-limit – Juan Aug 09 '18 at 19:14
  • If the call is to the same web service you can create one where you receive a list of the objects and then call it once. And other idea is to count an x amount of calls and the wait for them to complete and call again. – Juan Aug 09 '18 at 19:16
  • What are the values of: `numClients, parallelOptions`? All async calls are awaited so there shouldn't be a difference in actual simultaneous connections. – Stefan Aug 09 '18 at 19:19
  • You should not be using an `async` delegate inside a `Parallel.For`, this results in an unawaitable `async void`. `Parallel.For` is primarily meant for CPU bound operations. For `async` IO bound use a `Select` or even just a `foreach` – JSteward Aug 09 '18 at 19:32

1 Answer


Parallel.For is designed to wait for all of its loop-body delegates to complete. With an asynchronous delegate, however, each delegate returns as soon as it reaches its first await, handing back a promise for the rest of the work, and nothing outside ever awaits that promise because Parallel.For isn't designed for it (the lambda compiles to an unawaitable async void). As a result, Parallel.For rapidly fires off iterations that get no further than starting their very first asynchronous operation, ConnectAsync, which most likely floods the server with SYN requests. Other side effects are also possible, all stemming from the fact that the asynchronous operations are never awaited.
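One way to fix the client is to drop Parallel.For entirely and start the loops as real Tasks that are awaited end to end. A minimal sketch, assuming `host`, `port`, `message` (a `byte[]`), and `numClients` are defined as in the question:

```csharp
using System;
using System.Linq;
using System.Net.Sockets;
using System.Threading.Tasks;

// Start numClients genuinely awaited connection loops instead of
// fire-and-forget async void delegates inside Parallel.For.
var clientTasks = Enumerable.Range(0, numClients)
    .Select(async i =>
    {
        while (true)
        {
            // 'using' disposes the client (and its stream) each iteration.
            using var tcpClient = new TcpClient();
            try
            {
                await tcpClient.ConnectAsync(host, port);

                var stream = tcpClient.GetStream();
                await stream.WriteAsync(message);

                var buffer = new byte[1024];
                await stream.ReadAsync(buffer, 0, 1024);
            }
            catch (Exception ex)
            {
                Console.WriteLine($"{DateTime.Now.ToLongTimeString()}: {ex.Message}");
            }
        }
    })
    .ToList();

// Now every iteration's await chain is observed, so at most
// numClients connections are in flight at any moment.
await Task.WhenAll(clientTasks);
```

Because each lambda here returns a `Task` that `Task.WhenAll` actually awaits, the number of simultaneous connection attempts is bounded by `numClients`, matching the behavior of the synchronous client.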

Dmytro Mukalov