I'm creating a Windows Service with a component that listens on a named pipe to interact with programs run from userspace. I've used the code from this answer as the base for a multithreaded server implementation, but I get a strong code smell from the ProcessNextClient action being called in a tight loop, reproduced below. Is there really no better way to know when an opening for another stream can be added to the named pipe than to repeatedly catch an IOException and try again?

    public void ProcessNextClient()
    {
        try
        {
            NamedPipeServerStream pipeStream = new NamedPipeServerStream(PipeName, PipeDirection.InOut, 254);
            pipeStream.WaitForConnection();

            // Spawn a new thread for each request and continue waiting
            Thread t = new Thread(ProcessClientThread);
            t.Start(pipeStream);
        }
        catch (Exception)
        {
            // If there are no more available connections (all 254 are in use),
            // just keep looping until one is available
        }
    }
psaxton
  • What I'm looking for, I suppose, is either: a way to keep track of the number of threads/tasks which are still active and thus have open pipes; or a way to block on the new NamedPipeServerStream until resources are available without creating a spinlock. – psaxton Mar 27 '13 at 18:54
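A minimal sketch of the blocking approach described in that comment, assuming .NET 4's SemaphoreSlim sized to the instance limit so that Wait() blocks until a worker releases a slot instead of spinning; PipeName and ProcessClientThread are placeholders:

    using System.IO.Pipes;
    using System.Threading;

    public class PipeServer
    {
        private const string PipeName = "MyServicePipe"; // placeholder
        private const int MaxInstances = 254;

        // One slot per allowed pipe instance; Wait() blocks, no spinning.
        private readonly SemaphoreSlim _slots = new SemaphoreSlim(MaxInstances);

        public void ProcessNextClient()
        {
            _slots.Wait(); // blocks until an existing worker releases a slot

            NamedPipeServerStream pipeStream = null;
            try
            {
                pipeStream = new NamedPipeServerStream(
                    PipeName, PipeDirection.InOut, MaxInstances);
                pipeStream.WaitForConnection();

                var t = new Thread(state =>
                {
                    try { ProcessClientThread(state); }
                    finally { _slots.Release(); } // free the slot when done
                });
                t.Start(pipeStream);
            }
            catch
            {
                _slots.Release(); // don't leak a slot on failure
                if (pipeStream != null) pipeStream.Dispose();
                throw;
            }
        }

        private void ProcessClientThread(object pipe)
        {
            // placeholder: handle the connected client here
        }
    }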

2 Answers


You could defer to WCF to handle the pipes. You would benefit from an interrupt-driven system using IO Completion Ports to notify your application code when new connections are made into the application.

Taking the pain of implementing WCF would also give you the ability to scale beyond one machine: if you need to take your application over more than one node, you can do so just by changing the binding from a pipe to a TCP or HTTP binding.

An example implementation of a WCF service is here. It also shows how you could host the same service on pipes or on TCP.
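For context, here is a minimal sketch of self-hosting a WCF service over a named pipe; the contract, address, and endpoint name are invented for illustration. Moving to TCP is roughly a matter of swapping NetNamedPipeBinding for NetTcpBinding and changing the base address:

    using System;
    using System.ServiceModel;

    [ServiceContract]
    public interface IPipeService
    {
        [OperationContract]
        string Echo(string message);
    }

    public class PipeService : IPipeService
    {
        public string Echo(string message)
        {
            return "Echo: " + message;
        }
    }

    class Program
    {
        static void Main()
        {
            // Host the service over a named pipe; WCF handles instance
            // management and connection dispatch internally.
            using (var host = new ServiceHost(typeof(PipeService),
                new Uri("net.pipe://localhost/MyPipeService")))
            {
                host.AddServiceEndpoint(typeof(IPipeService),
                    new NetNamedPipeBinding(), "echo");
                host.Open();
                Console.WriteLine("Service running; press Enter to stop.");
                Console.ReadLine();
                host.Close();
            }
        }
    }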

Spence

It looks to me like the code will sit at

pipeStream.WaitForConnection();

until a client is detected, and then continue. I don't think it's looping like you described unless it's being hammered with clients. You could always add a breakpoint to check.

jugg1es
  • It will, right up until the limit for server instances is reached. Then it is a throw->catch tight loop. I guess what I'm looking for is a way to block until at least one of the existing tasks/threads completes releasing its hold on the pipe. – psaxton Mar 26 '13 at 21:15
  • What if you added a Thread.Sleep to the error catch? It would pause the application thread to prevent excessive CPU load. Or you could queue the requests. – jugg1es Mar 26 '13 at 21:27
  • Sleep will at least prevent the process from hogging the CPU, but the inelegance is still there. Could you perhaps elaborate on what you mean by queuing the requests? – psaxton Mar 26 '13 at 21:43
  • It's not always a no-no to use try-catches in .NET. There are some situations where it is unavoidable, particularly when dealing with things that are out of your control, such as 'file in use' issues or clients hammering on your Windows service. In those situations, it might be better to simply use the 'does it work' threshold. By queuing, I mean that once the limit is reached, you could add the client to a Queue and then check the queue before you start listening for a new client. – jugg1es Mar 26 '13 at 21:54
  • It's not that I am trying to avoid the try catch, it's that I'm trying to avoid the [spinlock](http://en.wikipedia.org/wiki/Spinlock) of the [tight loop](http://en.wiktionary.org/wiki/tight_loop). Without an open NamedPipeServerStream, there is no client to queue. With one, there is no need to block. – psaxton Mar 27 '13 at 18:48
  • Well, I was just throwing out ideas. I don't think you should worry too much about using the try/catch as long as it works. Like I said, sometimes it's unavoidable. If there are more clients than available connections, then they'll just have to wait. – jugg1es Mar 27 '13 at 18:53
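For reference, the Thread.Sleep suggestion from the comments amounts to something like the sketch below (PipeName and ProcessClientThread are placeholders). It still polls, but it no longer saturates a core while all instances are busy:

    using System;
    using System.IO;
    using System.IO.Pipes;
    using System.Threading;

    public class PipeServer
    {
        private const string PipeName = "MyServicePipe"; // placeholder
        private const int MaxInstances = 254;

        public void ProcessNextClient()
        {
            try
            {
                var pipeStream = new NamedPipeServerStream(
                    PipeName, PipeDirection.InOut, MaxInstances);
                pipeStream.WaitForConnection();

                var t = new Thread(ProcessClientThread);
                t.Start(pipeStream);
            }
            catch (IOException)
            {
                // All instances are in use; back off briefly so the
                // retry loop does not spin at 100% CPU.
                Thread.Sleep(100);
            }
        }

        private void ProcessClientThread(object pipe)
        {
            // placeholder: handle the connected client here
        }
    }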