
Background: I must call a web service 1500 times, and each call takes roughly 1.3 seconds to complete. (I have no control over this 3rd-party API.) Total time = 1500 * 1.3 = 1950 seconds / 60 = roughly 32.5 minutes.

I came up with what I thought was a good solution, but it did not pan out that well. So I changed the calls to async web calls, thinking this would dramatically improve my results; it did not.

Example Code:

Pre-Optimizations:

foreach (var elmKeyDataElementNamed in findResponse.Keys)
{

    var getRequest = new ElementMasterGetRequest
    {
        Key = new elmFullKey
        {
            CmpCode = CodaServiceSettings.CompanyCode,
            Code = elmKeyDataElementNamed.Code,
            Level = filterLevel
        }
    };

    ElementMasterGetResponse getResponse;
    _elementMasterServiceClient.Get(new MasterOptions(), getRequest, out getResponse);
    elementList.Add(new CodaElement { Element = getResponse.Element, SearchCode = filterCode });
}

With Optimizations:

var tasks = findResponse.Keys.Select(elmKeyDataElementNamed => new ElementMasterGetRequest
    {
        Key = new elmFullKey
            {
                CmpCode = CodaServiceSettings.CompanyCode,
                Code = elmKeyDataElementNamed.Code,
                Level = filterLevel
            }
    }).Select(getRequest => _elementMasterServiceClient.GetAsync(new MasterOptions(), getRequest)).ToList();

Task.WaitAll(tasks.ToArray());

elementList.AddRange(tasks.Select(p => new CodaElement
    {
        Element = p.Result.GetResponse.Element,
        SearchCode = filterCode
    }));
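For comparison, the same fan-out could also be written with async/await and an explicit throttle, so the number of in-flight requests is capped rather than unbounded. This is only a sketch, reusing the hypothetical service-client types and names from the snippets above, and it assumes it runs inside an async method:

```csharp
// Sketch only: cap concurrent calls at 10 with a SemaphoreSlim.
// ElementMasterGetRequest, GetAsync, etc. are the same hypothetical
// service-client members used in the snippets above.
var throttle = new SemaphoreSlim(10);

var tasks = findResponse.Keys.Select(async key =>
{
    await throttle.WaitAsync();
    try
    {
        var getRequest = new ElementMasterGetRequest
        {
            Key = new elmFullKey
            {
                CmpCode = CodaServiceSettings.CompanyCode,
                Code = key.Code,
                Level = filterLevel
            }
        };
        var response = await _elementMasterServiceClient.GetAsync(new MasterOptions(), getRequest);
        return new CodaElement { Element = response.GetResponse.Element, SearchCode = filterCode };
    }
    finally
    {
        throttle.Release(); // free a slot for the next pending request
    }
}).ToList();

elementList.AddRange(await Task.WhenAll(tasks));
```

A throttle like this keeps a steady number of requests outstanding instead of queueing all 1500 at once, which also makes it easier to see where a bottleneck actually is.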

Smaller Sampling Example: To test this easily I ran a smaller sample of 40 records. It took 60 seconds with no optimizations; with the optimizations it only took 50 seconds. I would have thought it would be closer to 30 seconds or better.

I used Wireshark to watch the transactions come through and realized the async version was not sending requests as fast as I assumed it would.

[Screenshot: async requests captured]

[Screenshot: normal requests, no optimization]

You can see that the async version pushes a few requests very fast and then drops off... Also note that between requests 10 and 11 it took nearly 3 seconds.

Is the overhead of creating threads for the tasks so high that it takes seconds? Note: the tasks I am referring to are from the .NET 4.5 TAP (Task-based Asynchronous Pattern) library.

Why wouldn't the requests go out faster than that? I was told the Apache web server I was hitting can hold a maximum of 200 threads, so I don't see an issue there.

Am I not thinking about this clearly? When calling web services, is there little advantage to async requests? Do I have a mistake in my code? Any ideas would be great.

retslig
  • Out of the 1.3 seconds it takes to execute a call, how much of that time is spent actually performing the work on the server? If the server is taking 1.2 seconds to execute the work and there are resource constraints, async might not make a difference. – Pete Apr 05 '13 at 18:42
  • Understood that the server may always take 1.3 seconds per call, but if the requests run concurrently then the overall time should drop, correct? – retslig Apr 05 '13 at 18:52
  • It depends on resource constraints. Some things don't benefit from concurrency and some things are even hurt by it. It depends on what is causing the server to take 1.3 seconds. It may be something that doesn't benefit from concurrency. – Pete Apr 05 '13 at 19:17
  • Forget the time it takes to run on the server; the requests arriving at the server are what I am concerned about. Sometimes the server doesn't get my request promptly: for example, between requests 10 and 11 it took nearly 3 seconds for the request to reach the server. I am not even looking at how long the server's responses take. I was just hoping I could send all the requests within, say, 15 seconds and then get the responses back whenever. – retslig Apr 05 '13 at 19:54
  • 1
    Presumably there's a limit on the number of concurrent requests from a single IP address, in addition to the overall concurrent requests number. That is, while the server might be able to handle 200 requests concurrently, it's not necessarily going to allow any one client to consume all of those. It would be simple for individuals to DDOS a web server if that were the case. – Pete Apr 05 '13 at 20:03

1 Answer


After many days of searching I found this post that solved my problem: Trying to run multiple HTTP requests in parallel, but being limited by Windows (registry)

The reason the requests were not hitting the server more quickly was due to my client-side code and had nothing to do with the server. By default, .NET only allows 2 concurrent requests per host. See here: http://msdn.microsoft.com/en-us/library/system.net.servicepointmanager.defaultconnectionlimit.aspx

I simply added this line of code, and then all requests went out within milliseconds.

System.Net.ServicePointManager.DefaultConnectionLimit = 50;
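A minimal, self-contained sketch of the fix (the URL is hypothetical, standing in for the third-party service, and the right limit value depends on what the server will tolerate). The limit should be raised before the first request is issued, since the setting is picked up when the connection group for a host is first created:

```csharp
using System;
using System.Linq;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;

class Program
{
    static void Main()
    {
        // Raise the per-host connection limit before any request is issued;
        // the default of 2 effectively serializes "parallel" HTTP calls.
        ServicePointManager.DefaultConnectionLimit = 50;

        using (var client = new HttpClient())
        {
            // Hypothetical endpoint standing in for the third-party service.
            var tasks = Enumerable.Range(0, 40)
                .Select(i => client.GetAsync("http://example.com/api/item/" + i))
                .ToArray();

            Task.WaitAll(tasks); // up to 50 requests can now be in flight at once
        }
    }
}
```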


retslig
  • So did you also make the registry change in the post you referred to, or did you only change System.Net.ServicePointManager.DefaultConnectionLimit ? –  Oct 25 '13 at 01:51
  • No registry change was needed, that was strictly for IE (if I remember correctly) and this did not deal with that. – retslig Oct 25 '13 at 14:54