
I am stumped. Perhaps someone can shed some light on WCF client behavior I am observing.

Using the WCF samples, I've started playing with different approaches to WCF client/server communication. While executing 1M test requests in parallel, I used SysInternals TcpView to monitor open ports. There are at least 4 different ways to call the client:

  1. Create the client, do your thing, and let GC collect it
  2. Create the client in a using block, then do your thing
  3. Create the client channel from a factory in a using block, then do your thing
  4. Create the client or channel, but use WCF Extensions to do your thing

Now, to my knowledge, only options 2-4 explicitly call client.Close(). During their execution I see a lot of ports left in the TIME_WAIT state. I'd expect option 1 to be the worst-case scenario, due to its reliance on the GC. However, to my surprise, it seems to be the cleanest of them all: it leaves no lingering ports behind.

What am I missing?
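(For reference, the TIME_WAIT counts above were observed in TcpView; the same check can be scripted from .NET. This is a small sketch of my own, not code from the original test harness:)

```csharp
using System;
using System.Linq;
using System.Net.NetworkInformation;

class TimeWaitCount
{
    static void Main()
    {
        // Count local TCP connections stuck in TIME_WAIT --
        // the programmatic equivalent of eyeballing SysInternals TcpView.
        var timeWait = IPGlobalProperties.GetIPGlobalProperties()
            .GetActiveTcpConnections()
            .Count(c => c.State == TcpState.TimeWait);
        Console.WriteLine("TIME_WAIT connections: " + timeWait);
    }
}
```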

UPDATE: Source code

    private static void RunClientWorse(ConcurrentBag<double> cb)
    {
        var client = new CalculatorClient();
        client.Endpoint.Address = new EndpointAddress("net.tcp://localhost:8000/ServiceModelSamples/service");
        RunClientCommon(cb, client);                        
    }

    private static void RunClientBetter(ConcurrentBag<double> cb)
    {
        using (var client = new CalculatorClient())
        {
            client.Endpoint.Address = new EndpointAddress("net.tcp://localhost:8000/ServiceModelSamples/service");
            RunClientCommon(cb, client);
        }
    }

    private static void RunClientBest(ConcurrentBag<double> cb)
    {
        const string Uri = "net.tcp://localhost:8000/ServiceModelSamples/service";
        var address = new EndpointAddress(Uri);
        //var binding = new NetTcpBinding("netTcpBinding_ICalculator");
        using (var factory = new ChannelFactory<ICalculator>("netTcpBinding_ICalculator",address))
        {
            ICalculator client = factory.CreateChannel();
            ((IContextChannel)client).OperationTimeout = TimeSpan.FromSeconds(60);
            RunClientCommon(cb, client);
        }
    }

    private static void RunClientBestExt(ConcurrentBag<double> cb)
    {
        const string Uri = "net.tcp://localhost:8000/ServiceModelSamples/service";
        var address = new EndpointAddress(Uri);
        //var binding = new NetTcpBinding("netTcpBinding_ICalculator");
        new ChannelFactory<ICalculator>("netTcpBinding_ICalculator", address).Using(
            factory =>
                {
                    ICalculator client = factory.CreateChannel();
                    ((IContextChannel)client).OperationTimeout = TimeSpan.FromSeconds(60);
                    RunClientCommon(cb, client);
                });
    }
Darek
    You're missing some source code... Could we see your unit tests? – Shotgun Ninja May 08 '13 at 14:13
  • See http://stackoverflow.com/questions/573872/what-is-the-best-workaround-for-the-wcf-client-using-block-issue - the using block can cause issues with WCF. – TrueWill May 08 '13 at 14:21
  • Thanks for the link, quite interesting read, but it still does not explain why GC leaves no TIME_WAITs behind, but client.Close() does. – Darek May 08 '13 at 14:29

1 Answer


I have figured it out, I think. The GC will not call Dispose on ClientBase, so Close() is never invoked; without that graceful close, the client side never leaves a socket in the TIME_WAIT state. So I decided to follow the same pattern and created a new WCF extension:

    // Extension methods must live in a static class; the containing
    // class declaration was missing from the snippet above.
    public static class WcfAbortExtensions
    {
        public static void UsingAbort<T>(this T client, Action<T> work)
            where T : ICommunicationObject
        {
            try
            {
                work(client);
                client.Abort();  // intentionally Abort() instead of Close()
            }
            catch (CommunicationException e)
            {
                Logger.Warn(e);
                client.Abort();
            }
            catch (TimeoutException e)
            {
                Logger.Warn(e);
                client.Abort();
            }
            catch (Exception e)
            {
                Logger.Warn(e);
                client.Abort();
                throw;
            }
        }
    }

This way, at the end of a request it will simply Abort the connection instead of closing it.
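(Usage mirrors `RunClientBestExt`, just swapping the extension. A sketch only; the endpoint and type names are taken from the question's code:)

```csharp
// Hypothetical call site: same shape as RunClientBestExt, but the factory
// is aborted rather than closed when the work completes.
var address = new EndpointAddress("net.tcp://localhost:8000/ServiceModelSamples/service");
new ChannelFactory<ICalculator>("netTcpBinding_ICalculator", address).UsingAbort(
    factory =>
        {
            ICalculator client = factory.CreateChannel();
            ((IContextChannel)client).OperationTimeout = TimeSpan.FromSeconds(60);
            RunClientCommon(cb, client);
        });
```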

Darek
  • The problem with your new pattern is that Abort() doesn't notify the service of the client shutdown. By not calling Close() on an open connection in the try block you are leaving connections open on the server until they time out. Suggested reading: http://stackoverflow.com/questions/573872/what-is-the-best-workaround-for-the-wcf-client-using-block-issue – ErnieL May 09 '13 at 13:04
  • I don't believe that's the case @ErnieL. Per Microsoft documentation, Abort() causes the ClientBase object to transition immediately from its current state into the closed state. This seems to be confirmed by the port shutting down server side. Am I missing something? – Darek May 13 '13 at 12:54
  • The documentation likes to say that Abort() is "immediate" and Close() is "graceful". For example: http://msdn.microsoft.com/en-us/library/ms195520.aspx. Put it this way: your pattern *never* calls Close(), and it's well documented that Dispose() calls Close() and not Abort(). So if your pattern is right, why is Close() in the interface at all? – ErnieL May 13 '13 at 15:38
  • If the port, in TcpView on server side, changes status from ESTABLISHED to "gone", is that a confirmation that a connection was closed? If I call Close() or Dispose() it changes to TIME_WAIT, for 30s (or 4 minutes if you use the default registry setting). My pattern does not require multi-call processing, so from that perspective Abort is just as good as Close, if Abort actually closes the connection. – Darek May 13 '13 at 18:44