1

I've got a NetTcp WCF service running on my server, and a client that's calling one of the methods of that service asynchronously.

When the server returns small amounts of data, everything is hunky dory. But when there's a lot of data to return, the client summarily crashes, without even so much as an exception to catch.

From what I can see, the crash happens almost immediately after the server returns the data.

Any ideas how to debug this? Is there some setting on the service that I should be tweaking to allow me to return large volumes of data in this call?

Shaul Behr
  • This could be due to max message size, see: http://stackoverflow.com/questions/884235/wcf-how-to-increase-message-size-quota/884248#884248 – Nate Apr 13 '11 at 17:23
  • How much is a lot of data? Do you own the client, so you can make adjustments on it? – driis Apr 13 '11 at 17:24
  • Can you get the data in chunks, or change the serializer to something more frugal? – Marc Gravell Apr 13 '11 at 17:29
  • @driis - it crashed under 2800 rows of data @ about 1k per row. Haven't tested to see at exactly what threshold it fails. But yes, I do own the client. Does that help? – Shaul Behr Apr 13 '11 at 19:04
  • @Marc Gravell - I'm open to getting the data in chunks - but how do you do that? – Shaul Behr Apr 13 '11 at 19:05
  • @Shaul I'd change the API to take data in logical pages (each page could be thousands of records if needed), so that it isn't one huge call; and I'd change to use something like protobuf-net to make each call use less bandwidth – Marc Gravell Apr 13 '11 at 20:24

3 Answers

1

Try increasing the maxReceivedMessageSize="SomeMaxSize" attribute on your binding. The sample below shows an HTTP binding, but the same attributes apply to netTcpBinding:

  <binding name="BasicHttp" closeTimeout="00:01:00"
        openTimeout="00:01:00" receiveTimeout="00:10:00" sendTimeout="00:01:00"
        allowCookies="false" bypassProxyOnLocal="true" hostNameComparisonMode="StrongWildcard"
        maxBufferSize="1000000" maxBufferPoolSize="524288" maxReceivedMessageSize="1000000"
        messageEncoding="Text" textEncoding="utf-8" transferMode="Buffered"
        useDefaultWebProxy="true">
    <readerQuotas maxDepth="32" maxStringContentLength="8192" maxArrayLength="16384"
        maxBytesPerRead="4096" maxNameTableCharCount="16384" />
  </binding>
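
Since the question is about NetTcp, here is what the equivalent netTcpBinding might look like; the binding name and quota values below are illustrative, not taken from the original configuration. Note that the readerQuotas values (particularly maxArrayLength and maxStringContentLength) usually need raising along with maxReceivedMessageSize, and that the increased values must be applied on both the client and the service, since each side enforces its own quotas.

  <netTcpBinding>
    <!-- "LargeNetTcp" and the quota values are illustrative -->
    <binding name="LargeNetTcp" maxBufferSize="10000000"
          maxBufferPoolSize="524288" maxReceivedMessageSize="10000000">
      <readerQuotas maxDepth="32" maxStringContentLength="10000000"
          maxArrayLength="10000000" maxBytesPerRead="4096"
          maxNameTableCharCount="16384" />
    </binding>
  </netTcpBinding>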

You can also enable tracing for your service by adding the section below to your web.config (or app.config, for a self-hosted service).

<system.diagnostics>
  <sources>
    <source name="System.ServiceModel" switchValue="Information, ActivityTracing" propagateActivity="true">
      <listeners>
        <add name="traceListener" type="System.Diagnostics.XmlWriterTraceListener" initializeData="c:\log\Traces.svclog" />
      </listeners>
    </source>
  </sources>
</system.diagnostics>
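
The resulting .svclog file can be opened with the Service Trace Viewer tool (SvcTraceViewer.exe, which ships with the Windows SDK); it will usually pinpoint exactly which quota was exceeded.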

Abhilash
1

My approach here is:

  • split the data into several calls, essentially pages of (say) 500 rows each. A handful of round-trips (rather than 1) won't hurt latency, but will increase stability
  • change the serializer (I'm biased here, but I like protobuf-net) to reduce the bandwidth needed per call, ideally in combination with enabling MTOM

Those two changes reduce the bandwidth in complementary ways, and together they should fix the issue. A minimal sketch of a paged contract is shown below.
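
For illustration only: the contract and type names here (IRecordService, Record, GetRecordsPage) are hypothetical, not from the original service.

  using System.Collections.Generic;
  using System.Runtime.Serialization;
  using System.ServiceModel;

  [DataContract]
  public class Record
  {
      [DataMember] public int Id { get; set; }
      [DataMember] public string Payload { get; set; }
  }

  [ServiceContract]
  public interface IRecordService
  {
      // Tells the client how many rows exist, hence how many pages to fetch.
      [OperationContract]
      int GetRecordCount();

      // Returns one page of (say) 500 rows per call instead of everything at once.
      [OperationContract]
      List<Record> GetRecordsPage(int pageIndex, int pageSize);
  }

The client then loops over the pages, so no single response comes anywhere near the message size quota.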

Marc Gravell
0

Increasing the max size is not the best solution; if your service is hosted in IIS, at some point it will crash anyway. If you are using the NetTcp binding, the best way to handle large payloads is to expose a Stream over TCP using WCF's streamed transfer mode.
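
A minimal sketch of what a streamed contract could look like, assuming hypothetical names (IExportService, ExportRecords). With transferMode="Streamed" the message is no longer buffered whole in memory, so maxReceivedMessageSize can be raised to cover the stream length:

  using System.IO;
  using System.ServiceModel;

  [ServiceContract]
  public interface IExportService
  {
      // A streamed operation must expose a single Stream as its message body;
      // WCF then reads it in chunks rather than buffering the whole response.
      [OperationContract]
      Stream ExportRecords();
  }

with a binding along the lines of:

  <netTcpBinding>
    <!-- "StreamedTcp" is an illustrative name -->
    <binding name="StreamedTcp" transferMode="Streamed"
        maxReceivedMessageSize="2147483647" />
  </netTcpBinding>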

Stefan P.