
I've been banging my head against this issue for at least a week now (learned something new too - WCF is a major PITA).

Here's my problem: I have a scenario in my app that at some point freezes the whole client indefinitely (I disabled the timeouts, since both the client and server run in a controlled environment). The deadlock happens on exactly the same call every time, I presume due to the burst of requests preceding it.

Inspecting the deadlock stack trace on the client gives me this:

[In a sleep, wait, or join] 
WindowsBase.dll!System.Windows.Threading.DispatcherSynchronizationContext.Wait(System.IntPtr[] waitHandles, bool waitAll, int millisecondsTimeout) + 0x26 bytes 
mscorlib.dll!System.Threading.SynchronizationContext.InvokeWaitMethodHelper(System.Threading.SynchronizationContext syncContext, System.IntPtr[] waitHandles, bool waitAll, int millisecondsTimeout) + 0x1c bytes   
[Native to Managed Transition]  
[Managed to Native Transition]  
mscorlib.dll!System.Threading.WaitHandle.InternalWaitOne(System.Runtime.InteropServices.SafeHandle waitableSafeHandle, long millisecondsTimeout, bool hasThreadAffinity, bool exitContext) + 0x2b bytes 
mscorlib.dll!System.Threading.WaitHandle.WaitOne(int millisecondsTimeout, bool exitContext) + 0x2d bytes    
mscorlib.dll!System.Threading.WaitHandle.WaitOne() + 0x10 bytes 
System.Runtime.DurableInstancing.dll!System.Runtime.TimeoutHelper.WaitOne(System.Threading.WaitHandle waitHandle, System.TimeSpan timeout) + 0x7c bytes 
System.ServiceModel.dll!System.ServiceModel.Channels.OverlappedContext.WaitForSyncOperation(System.TimeSpan timeout, ref object holder) + 0x40 bytes    
System.ServiceModel.dll!System.ServiceModel.Channels.PipeConnection.WaitForSyncRead(System.TimeSpan timeout, bool traceExceptionsAsErrors) + 0x38 bytes 
System.ServiceModel.dll!System.ServiceModel.Channels.PipeConnection.Read(byte[] buffer, int offset, int size, System.TimeSpan timeout) + 0xef bytes 
System.ServiceModel.dll!System.ServiceModel.Channels.DelegatingConnection.Read(byte[] buffer, int offset, int size, System.TimeSpan timeout) + 0x21 bytes   
System.ServiceModel.dll!System.ServiceModel.Channels.ConnectionUpgradeHelper.InitiateUpgrade(System.ServiceModel.Channels.StreamUpgradeInitiator upgradeInitiator, ref System.ServiceModel.Channels.IConnection connection, System.ServiceModel.Channels.ClientFramingDecoder decoder, System.ServiceModel.IDefaultCommunicationTimeouts defaultTimeouts, ref System.Runtime.TimeoutHelper timeoutHelper) + 0xb3 bytes  
System.ServiceModel.dll!System.ServiceModel.Channels.ClientFramingDuplexSessionChannel.SendPreamble(System.ServiceModel.Channels.IConnection connection, System.ArraySegment<byte> preamble, ref System.Runtime.TimeoutHelper timeoutHelper) + 0x155 bytes  
System.ServiceModel.dll!System.ServiceModel.Channels.ClientFramingDuplexSessionChannel.DuplexConnectionPoolHelper.AcceptPooledConnection(System.ServiceModel.Channels.IConnection connection, ref System.Runtime.TimeoutHelper timeoutHelper) + 0x25 bytes  
System.ServiceModel.dll!System.ServiceModel.Channels.ConnectionPoolHelper.EstablishConnection(System.TimeSpan timeout) + 0xe2 bytes 
System.ServiceModel.dll!System.ServiceModel.Channels.ClientFramingDuplexSessionChannel.OnOpen(System.TimeSpan timeout) + 0x37 bytes 
System.ServiceModel.dll!System.ServiceModel.Channels.CommunicationObject.Open(System.TimeSpan timeout) + 0x13f bytes    
System.ServiceModel.dll!System.ServiceModel.Channels.ServiceChannel.OnOpen(System.TimeSpan timeout) + 0x52 bytes    
System.ServiceModel.dll!System.ServiceModel.Channels.CommunicationObject.Open(System.TimeSpan timeout) + 0x13f bytes    
System.ServiceModel.dll!System.ServiceModel.Channels.ServiceChannel.CallOpenOnce.System.ServiceModel.Channels.ServiceChannel.ICallOnce.Call(System.ServiceModel.Channels.ServiceChannel channel, System.TimeSpan timeout) + 0x12 bytes  
System.ServiceModel.dll!System.ServiceModel.Channels.ServiceChannel.CallOnceManager.CallOnce(System.TimeSpan timeout, System.ServiceModel.Channels.ServiceChannel.CallOnceManager cascade) + 0x10c bytes    
System.ServiceModel.dll!System.ServiceModel.Channels.ServiceChannel.Call(string action, bool oneway, System.ServiceModel.Dispatcher.ProxyOperationRuntime operation, object[] ins, object[] outs, System.TimeSpan timeout) + 0x18b bytes    
System.ServiceModel.dll!System.ServiceModel.Channels.ServiceChannelProxy.InvokeService(System.Runtime.Remoting.Messaging.IMethodCallMessage methodCall, System.ServiceModel.Dispatcher.ProxyOperationRuntime operation) + 0x59 bytes    
System.ServiceModel.dll!System.ServiceModel.Channels.ServiceChannelProxy.Invoke(System.Runtime.Remoting.Messaging.IMessage message) + 0x65 bytes    
mscorlib.dll!System.Runtime.Remoting.Proxies.RealProxy.PrivateInvoke(ref System.Runtime.Remoting.Proxies.MessageData msgData, int type) + 0xee bytes    
MyService.dll!MyService.Controller.CallMethod() + 0x9 bytes

The reason I suspect the burst of calls is that if I insert a sleep of 60s before the call is made, the deadlock doesn't occur.

Does anybody have any suggestions on how to avoid this issue?

P.S. I'm using named pipes.

EDIT:

The call to the WCF service on the client side happens on the GUI thread. Am I right to assume (from the call stack) that WCF ends up waiting on the GUI thread, which is causing the deadlock?
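
A minimal sketch of how I could test that assumption by moving the call off the GUI thread (placeholder names: CallMethod() stands in for my actual proxy operation, and channelFactory is the factory shown below):

// Sketch (assumption): run the proxy call on a thread-pool thread instead of the
// GUI thread, so WCF's internal WaitOne doesn't go through the
// DispatcherSynchronizationContext seen in the stack trace above.
// CallMethod() is a placeholder for the actual operation on ITestsModule.
var channel = channelFactory.CreateChannel();

Task.Factory.StartNew(() => channel.CallMethod())
    .ContinueWith(t =>
    {
        // Back on the GUI thread: inspect t.Exception for TimeoutException /
        // CommunicationException instead of letting the UI block on the call.
        if (t.IsFaulted)
            Trace.WriteLine(t.Exception.InnerException);
    }, TaskScheduler.FromCurrentSynchronizationContext());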

EDIT:

Client side channel factory initialization:

var binding = new NetNamedPipeBinding
    {
        OpenTimeout = TimeSpan.MaxValue,
        CloseTimeout = TimeSpan.MaxValue,
        SendTimeout = TimeSpan.MaxValue,
        ReceiveTimeout = TimeSpan.MaxValue,
        ReaderQuotas = { MaxStringContentLength = Int32.MaxValue, MaxArrayLength = Int32.MaxValue },
        MaxBufferPoolSize = Int32.MaxValue,
        MaxBufferSize = Int32.MaxValue,
        MaxReceivedMessageSize = Int32.MaxValue
    };
CustomBinding pipeBinding = new CustomBinding(binding);
pipeBinding.Elements.Find<NamedPipeTransportBindingElement>().ConnectionPoolSettings.IdleTimeout = TimeSpan.FromDays(24);
channelFactory = new ChannelFactory<ITestsModule>(pipeBinding,
    new EndpointAddress(string.Format("net.pipe://localhost/app_{0}/TestsModule", ProcessId)));

Server side host initialization:

var host = new ServiceHost(m_testModule, new Uri[] { new Uri(string.Format("net.pipe://localhost/app_{0}", Process.GetCurrentProcess().Id)) });
ServiceThrottlingBehavior throttle = host.Description.Behaviors.Find<ServiceThrottlingBehavior>();

if (throttle == null)
{
    throttle = new ServiceThrottlingBehavior();
    throttle.MaxConcurrentCalls = 500;
    throttle.MaxConcurrentSessions = 200;
    throttle.MaxConcurrentInstances = 100;
    host.Description.Behaviors.Add(throttle);
}

ThreadPool.SetMinThreads(1000, 1000);

var binding = new NetNamedPipeBinding
    {
        OpenTimeout = TimeSpan.MaxValue,
        CloseTimeout = TimeSpan.MaxValue,
        SendTimeout = TimeSpan.MaxValue,
        ReceiveTimeout = TimeSpan.MaxValue,
        ReaderQuotas = { MaxStringContentLength = Int32.MaxValue, MaxArrayLength = Int32.MaxValue },
        MaxBufferPoolSize = Int32.MaxValue,
        MaxBufferSize = Int32.MaxValue,
        MaxReceivedMessageSize = Int32.MaxValue
    };

CustomBinding pipeBinding = new CustomBinding(binding);
pipeBinding.Elements.Find<NamedPipeTransportBindingElement>().ConnectionPoolSettings.IdleTimeout = TimeSpan.FromDays(24);

host.AddServiceEndpoint(typeof(ITestsModule), pipeBinding, "TestsModule");

Service class behavior:

[ServiceBehavior(
    InstanceContextMode = InstanceContextMode.Single,
    ConcurrencyMode = ConcurrencyMode.Multiple,
    UseSynchronizationContext = false,
    IncludeExceptionDetailInFaults = true
)]
CosminB
    Please provide more details on the WCF configuration (binding, instance mode, concurrency, etc.) – Johann Blais Jul 25 '12 at 07:48
  • Updated with details on WCF configuration... – CosminB Jul 25 '12 at 08:23
  • How do you host the service? IIS? Windows service? UI application? – Johann Blais Jul 25 '12 at 08:34
  • I think you should consider more practical timeout values. 29247 years is a bit much... – tobias86 Jul 25 '12 at 08:40
  • What does "burst of requests" mean? Are you calling the service multiple times before this specific one? – tobias86 Jul 25 '12 at 08:41
  • The service is self-hosted. By "burst of requests" I mean that there are several requests (~15) coming in a short interval (1-2s). The reason I mentioned this is because by waiting 60s before the final call (that deadlocks), I avoid the deadlock. I haven't tried waiting for less than 60s yet. And yes, I know I should set more reasonable timeout values, but these are just for testing. – CosminB Jul 25 '12 at 08:49
  • Have you tried setting the MaxConnections property on the service binding configuration? The default value is usually 10. It could be that by the time this call is made, there are still at least 10 active connections to the service. Try setting it to something like 150. – tobias86 Jul 25 '12 at 08:51
  • Just tried it with 150 MaxConnections and it still reproduces. – CosminB Jul 25 '12 at 09:38
  • What about service throttling on the service behaviour? Try setting the `MaxConcurrentCalls`, `MaxConcurrentInstances` and `MaxConcurrentSessions` to 150. – tobias86 Jul 25 '12 at 10:50
  • I've updated the initialization of the service with the latest code. Setting the throttling behavior doesn't fix the issue. – CosminB Jul 25 '12 at 11:39
  • Also, I've ruled out another possible cause: the deadlock doesn't occur because I make the call on the GUI thread (I was thinking that maybe having the GUI thread locked somehow generated the deadlock). It deadlocks on any thread. – CosminB Jul 25 '12 at 11:40
  • Narrowed down the Thread.Sleep() range where the deadlock occurs. If I let it sleep for 11s, it reproduces. If I let it sleep for 15s, it works fine... – CosminB Jul 25 '12 at 12:46
  • Tried another thing, calling, in the same spot, a different function instead (similar in signature). Deadlocked as well, so I suspect some thread pool scheduler issue, or some inner WCF behavior that blocks on bursts (still unclear if server side or client side)... – CosminB Jul 26 '12 at 12:37
  • It is an issue with the SynchronizationContext and the WCF calls. I'm having the same deadlocks without doing bursts of calls. Check this out: http://stackoverflow.com/questions/1949789/using-synchronizationcontext-for-sending-events-back-to-the-ui-for-winforms-or-wp – Ignacio Soler Garcia Apr 19 '17 at 17:09

1 Answer


First, do you know what you're locking on the server side? Is the lock contention coming only from the WCF interface, or is your server also locking from other components/classes elsewhere? That's the most important question, and it has nothing to do with WCF.

That said, try the following to help narrow down the issue:

OPTION 1: Timeouts on the client side - don't leave them effectively infinite; set them to, say, ten seconds, and implement a client-side retry on timeout.
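
A rough sketch of what that retry could look like (illustrative only - channelFactory and CallMethod() are stand-ins for your actual factory and operation from the question):

// Sketch: short timeout plus a bounded retry loop on the client.
// Assumes the proxy comes from ChannelFactory<ITestsModule> as in the question;
// CallMethod() is a placeholder for the real service operation.
const int maxAttempts = 3;
for (int attempt = 1; attempt <= maxAttempts; attempt++)
{
    var channel = channelFactory.CreateChannel();
    try
    {
        channel.CallMethod();
        ((IClientChannel)channel).Close();
        break;                                  // success - stop retrying
    }
    catch (TimeoutException)
    {
        ((IClientChannel)channel).Abort();      // channel is unusable after a timeout
        if (attempt == maxAttempts) throw;      // give up after the last attempt
    }
    catch (CommunicationException)
    {
        ((IClientChannel)channel).Abort();
        if (attempt == maxAttempts) throw;
    }
}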

OPTION 2:

ServiceThrottlingBehavior ThrottleBehavior = new ServiceThrottlingBehavior();
ThrottleBehavior.MaxConcurrentSessions = 4;
ThrottleBehavior.MaxConcurrentCalls = 4;
ThrottleBehavior.MaxConcurrentInstances = 4;

ServiceHost Host = ...
Host.Description.Behaviors.Add(ThrottleBehavior);

If OPTION 2 helps, stress test it (you should do the same with OPTION 1). Also watch out for a build-up of thread counts if the MaxConcurrentXXX values are set to a big number.

Hope this helps.