
I have a named pipe server written in C#. The gist of the implementation is:

    void BeginWaitForNextConnection()
    {
        var pipe = new NamedPipeServerStream(
            PipeName,
            PipeDirection.InOut,
            NamedPipeServerStream.MaxAllowedServerInstances,
            PipeTransmissionMode.Byte,
            PipeOptions.Asynchronous,
            0, // default in buffer size
            0, // default out buffer size
            CreateAllAccessPipeSecurity());

        pipe.BeginWaitForConnection(ClientRequestHandler, pipe);
    }

    void ClientRequestHandler(IAsyncResult ar)
    {
        // Clean up the async call state.
        NamedPipeServerStream pipe = (NamedPipeServerStream)ar.AsyncState;

        pipe.EndWaitForConnection(ar);

        // If we've been asked to shut down, go away.
        if (_stopping)
        {
            pipe.Close();
            return;
        }

        // Set up for the next caller.
        BeginWaitForNextConnection();

        // Handle this client's I/O. This code wraps the pipe stream in BinaryReader and BinaryWriter objects and handles communication with the client.
        HandlePipeClient(pipe);
    }

This works perfectly fine -- until multiple instances try to connect in quick succession. My client code specifies a 10-second timeout, so I would expect that even if 10 instances tried to connect in the same second, they should all succeed: it shouldn't take 10 seconds for this code to cycle through 10 iterations of the ClientRequestHandler callback and back into BeginWaitForNextConnection. But that is not what I see. For the occasional one-off connection, this code is very reliable, but if I hit it with frequent requests, it appears that if a connection request arrives between the callback firing and the next BeginWaitForConnection, that connection does not queue up and get picked up immediately -- it simply gets lost.

Is this expected? What is the idiomatically correct solution? Do I just have to spool up a whole bunch of threads all waiting for connections at once?
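For what it's worth, one common way to narrow that window is to keep several overlapped accepts pending at once instead of a single one, so a client arriving while one callback is mid-flight can be matched to another waiting instance. The sketch below is an illustration of that pattern, not the code from the question: the `ListenerCount` constant and the `HandlePipeClient` stub are assumptions.

```csharp
using System.IO.Pipes;

class MultiListenerPipeServer
{
    const string PipeName = "MyPipe";
    const int ListenerCount = 4; // assumption: keep 4 accepts pending at once

    public void Start()
    {
        // Post several pending accepts up front. If a client connects while
        // one callback is between EndWaitForConnection and the replacement
        // BeginWaitForConnection, another pending instance can pick it up.
        for (int i = 0; i < ListenerCount; i++)
            BeginWaitForNextConnection();
    }

    void BeginWaitForNextConnection()
    {
        var pipe = new NamedPipeServerStream(
            PipeName,
            PipeDirection.InOut,
            NamedPipeServerStream.MaxAllowedServerInstances,
            PipeTransmissionMode.Byte,
            PipeOptions.Asynchronous);

        pipe.BeginWaitForConnection(ar =>
        {
            var p = (NamedPipeServerStream)ar.AsyncState;
            p.EndWaitForConnection(ar);

            // Replace this listener before doing any client I/O.
            BeginWaitForNextConnection();
            HandlePipeClient(p);
        }, pipe);
    }

    void HandlePipeClient(NamedPipeServerStream pipe)
    {
        // stub: per-client I/O as in the question, then:
        pipe.Dispose();
    }
}
```

This doesn't change the underlying pipe semantics; it just keeps more server instances in the listening state at any given moment, which shrinks the race the question describes.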

Jonathan Gilbert
  • I just spent some time trying to produce a minimal reproduction of the problem. I am able to reproduce it fairly reliably -- though, oddly, it appears one of the magic ingredients is using the Take Command shell for automation. If I simply write a program that runs clients as quickly as possible using `Process.Create`, then it doesn't seem possible to trigger the problem, even with 100 clients starting as quickly as possible. But, if I enter into TCC `for /L %i in (1,1,50) do start Client.exe`, then a significant but unpredictable number of the clients can't connect to the server. – Jonathan Gilbert Mar 15 '19 at 04:03
  • If I run my driver twice in quick succession, both times every client succeeds. If I use the TCC automation and get errors, and then run my driver immediately thereafter, the instances created by my driver have problems as well. It appears that _something_ that TCC is doing is causing an issue of some sort with pipe resources, but I can't imagine what it would be. – Jonathan Gilbert Mar 15 '19 at 04:05

1 Answer


I had a similar need.

I ended up with a server thread per connection: each new listener thread is created as soon as the previous pipe instance accepts a client.

    private void ServerConnectToPipe()
    {
        var bw = new BackgroundWorker();
        bw.DoWork += (s, e) => DoServerStuff();
        bw.RunWorkerAsync();
    }

    private void DoServerStuff()
    {
        var serverStream = new NamedPipeServerStream("PipeName", PipeDirection.In, NamedPipeServerStream.MaxAllowedServerInstances);
        var streamReader = new StreamReader(serverStream);

        serverStream.WaitForConnection();

        // A client is connected; immediately spin up the next listener.
        ServerConnectToPipe();

        Log("Server connection", "Client connected!");
        while (true)
        {
            var line = streamReader.ReadLine();
            if (line == null) // client disconnected
                break;
            // do stuff
        }
    }
Kevin Dimey