
I have a program that retrieves thousands of lines of data from a Visual FoxPro table and displays it to the user. The user can then select as many checks as they want, and I write a reversing amount to 10 different FoxPro tables.

Based on the information that was selected by the user, I create a record for each selection and write it to the table using WCF.

Below is a snippet of code that I use to write back to the tables:

Task head = new TaskFactory().StartNew(() =>
       {
           // Pass in i, which is defined within this routine; it cannot be global.
           CreateVoidHistHead(curChecks, i);
           CreateApHead(curChecks, i);

           var checkRecAdjustment = CreateAdjustmentRec(curChecks, ckdate);
           var checkRecVoidChks = CreateVoidChksRec(curChecks);
           var checkGjAdj = CreateGjAdj(curChecks);
           var checkApSnpSht = CreateApsnpRec(curChecks);

           using (var coatsService = factory.CreateCoatsService())
           {
               coatsService.InsertPrsnapc(_path, checkRecAdjustment);
               coatsService.InsertVoidchk(_path, checkRecVoidChks);
               coatsService.InsertGjadj(_path, checkGjAdj);
               coatsService.InsertApsnpsht(_path, checkApSnpSht);
           }
       });

As you can see in the code above, I am creating a task to handle this call. I am creating three tasks in total: the first task inserts into one table, the task above inserts into six tables, and the last task inserts into three tables (all using logic similar to the above).
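For context, the three-task structure looks roughly like this (a sketch; `headTask`, `apTask`, `gjTask`, and the `Insert*` wrapper methods are illustrative names, not my actual code):

```csharp
// Sketch of the three-task structure; each wrapper method does its own
// group of table inserts through the WCF service, as in the snippet above.
Task headTask = Task.Factory.StartNew(() => InsertHeadTable(curChecks, i));
Task apTask   = Task.Factory.StartNew(() => InsertApTables(curChecks, i));
Task gjTask   = Task.Factory.StartNew(() => InsertGjTables(curChecks, i));

// In scenario 2 below, I block until all three finish before the next check:
Task.WaitAll(headTask, apTask, gjTask);
```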

Here is the speed issue I am having. When I select 10 checks, the average time to write to these tables is 10 seconds. With 20 checks the average is 25 seconds, 50 checks takes about 1 minute, and 100 checks takes 10+ minutes. I even broke it down to the per-selected-item level: the average per item is 1.48 seconds with 10 checks, 2.6 seconds with 20 checks, 6.8 seconds with 50 checks, and roughly 12 seconds per check above that; the program also stops for a few minutes, then starts inserting again, stops, continues, and so on until all checks are inserted. (The per-check times come from the Stopwatch class, which I write out to the debug window. I am not taking the total time and dividing it by the number of checks.)

I have tried 3 different scenarios:

  1. I created 3 tasks with all of my inserts like stated above. I selected 100 checks and I had a background timer that would check the status of all the tasks (in this case 300) and it would inform me when the tasks were completed. This took over ten minutes.

  2. I created 3 tasks with all of my inserts like stated above. I selected 100 checks and I wait for all of the tasks to complete before I go to the next check. This also took over ten minutes.

  3. I selected 100 checks and I do not have any tasks running. I am just using the main thread to write the information back to the tables. It is the same as the code in my task above, minus the task. It took over 10 minutes to write to all of the tables.

As you can see, I experienced no speed increase whether I used tasks with a background timer, tasks that I waited on, or no tasks at all.

One other piece of information: I have changed my app.config to set `maxConnections="300"` and `listenBacklog="300"`. I am not sure whether this affects anything I am doing, but I thought I would throw it out there.
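For reference, those settings live on the binding in app.config; a minimal sketch, assuming a netTcpBinding (the binding name here is a placeholder, not my actual configuration):

```xml
<system.serviceModel>
  <bindings>
    <netTcpBinding>
      <!-- maxConnections caps pooled/pending connections per endpoint;
           listenBacklog caps queued connection requests on the service side -->
      <binding name="coatsTcpBinding"
               maxConnections="300"
               listenBacklog="300" />
    </netTcpBinding>
  </bindings>
</system.serviceModel>
```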

Questions:

  1. Can anyone explain why the average time per inserted check changes based on the number of checks selected? I would expect the per-check time to be roughly consistent whether I select 10 checks or 100, not 1.48 seconds for the group of 10 and 12 seconds for the group of 100.

  2. The program intermittently writes 15 checks or so, then stops for 1 to 3 minutes, then continues to process, stops again, and so on. Why is it that when I look at my output window, I see thread '' exited with code 259? I have looked at the question "Why am I seeing multiple 'The thread 0x22c8 has exited with code 259 (0x103).' messages?" and was unable to solve my problem. When I click the link in that question's answer, it takes me to a Microsoft forum post saying the issue has been fixed in a future release of Visual Studio.

  3. Could this be a networking issue? Since I am trying to push hundreds of inserts through WCF, could that be the cause of my performance issue?

Any help pointing me in the right direction on how to resolve this performance issue, whether it is network or WCF related, is greatly appreciated. If additional information is needed, please let me know and I will update the question.

John Janssen
    Use a profiler. How many concurrent operations and tasks are there? – usr Sep 24 '14 at 20:22
  • As @usr pointed out, use a profiler. It will tell you exactly how long each method takes and which line(s) the bottleneck could be. At the very least, you should time each one of those calls to see which one slows down more and more as the count grows. This will at least give you a small notion of where the problem could be. But yeah, a profiler will be your best friend. – TyCobb Sep 24 '14 at 20:56
  • Why are you using WCF to do this - is the VFP data remote? – Alan B Sep 25 '14 at 12:40
  • I found an issue in my code. I was calling the WCF service for each connection (since they are writing to different tables) and I was not disposing of these services after each call. I was starting a new service each time, and it kept piling up. Which explains why 10 or 20 checks was fast but 100 checks was slow... (that is over 1000 connections open with 100 checks!). After I wrapped a using around these calls, I am experiencing a much faster program. – John Janssen Sep 26 '14 at 11:03
  • We are experiencing an issue with our network along with not disposing, it was just a bad deal all around. Thank you for the suggestion for the profiler @usr it will help a lot. – John Janssen Sep 26 '14 at 11:05

1 Answer


The answer to my question is both: a network problem and a connection-disposal problem.

I do suggest using a profiler (thank you, usr, for the suggestion) to see where the bottlenecks are.

My company was having network problems that slowed the entire network down, but if anyone else has this issue, make sure you are disposing of every connection.

My main problem was that I was creating hundreds to thousands of connections and not wrapping them in a using statement. Once I wrapped every connection in a using, voilà, problem solved. Now I am getting similar per-check performance at 50, 100, and 350 checks.

For those of you who don't know how to use a using statement, here you go:

C#

    using (var coatsService = _factory.CreateCoatsService())
    {
        var cls = coatsService.GetPrdedtxts(_path);
        return cls.ToList();
    }

VB

    Using coatsService = _factory.CreateCoatsService()
        Dim cls = coatsService.GetPrdedtxts(_path)
        Return cls.ToList()
    End Using
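One caveat worth noting for WCF specifically (an assumption based on general WCF guidance, not something I measured myself): on a WCF client proxy, Dispose() calls Close(), which can itself throw if the channel has faulted and mask the original exception. A more defensive sketch, assuming the proxy implements ICommunicationObject:

```csharp
// Defensive close/abort pattern for a WCF client proxy (illustrative names).
var coatsService = _factory.CreateCoatsService();
try
{
    var cls = coatsService.GetPrdedtxts(_path);
    return cls.ToList();
}
finally
{
    var channel = (ICommunicationObject)coatsService;
    if (channel.State == CommunicationState.Faulted)
        channel.Abort();   // a faulted channel cannot be closed cleanly
    else
        channel.Close();   // releases the connection
}
```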